IS MORE INFORMATION BETTER? THAT DEPENDS…

When I launched the current multi-post series on The Most Important Technical Distinction in the World several weeks ago, I mentioned that I first got the idea for this series when reading The Chaos Machine by Max Fisher, a disturbing book that examines how such social media platforms as Facebook, YouTube, and Twitter have been reformulating their algorithms over the past two decades to maximize clicks, often at the expense of the public good. Over the next few posts, I’ll be coming back to this book, focusing on the particular argument Fisher makes that got me thinking about positive and negative feedback—and about how a failure to distinguish between the two can lead to bad social outcomes.

In its narrative arc, The Chaos Machine traces the path by which an exciting new tool for keeping up with former classmates slowly morphed into a major conduit for spreading online disinformation and vitriol, at times sparking real-world outbreaks of social unrest and violence. What particularly caught my eye were two of the reasons Fisher cites for why the leaders of the major social media platforms—their technical brilliance and generally good intentions notwithstanding—have seemed incapable of appreciating just how much potential for harm their platforms contain, thus helping explain why their efforts to limit the disinformation and hate speech appearing on their platforms have been lackluster, at best.

The first of these reasons, unsurprisingly, is money. As platforms like Facebook and YouTube (owned by Google) have grown into some of the most powerful companies in the world, they have sought—like every other for-profit business—to maximize their profits. Understandably, this has led their leaders to look for innovative ways to make yet more money, while resisting calls to change their business model in ways that could threaten their profits, even if such changes might promote the greater social good. The second, less obvious reason Fisher cites for why industry leaders like Facebook’s Mark Zuckerberg, Google’s Sergey Brin, and Twitter’s Jack Dorsey have seemed to have a blind spot for the potential harms their platforms can cause is precisely that these pioneering innovators are concerned about the greater good. In fact, Fisher argues, not just industry leaders but almost everyone who goes to work in Silicon Valley does so on the basis of a shared set of ideological convictions, which we could call the Silicon Valley ideology.

According to this belief system, the Information Revolution of the late twentieth century, occasioned by the rise of digital computers and the internet, has been one of the greatest things ever to happen to humanity, given that it has placed more information at the fingertips of average citizens than was available to most erudite scholars of past generations. The rise of social media then carried this revolution into the twenty-first century by dramatically expanding the information-sharing networks to which we all belong, thus allowing us to learn, not just from the friends and teachers we know personally, but from people around the world. And if this explosion in the amount of information to which we now have access has been a good thing, it follows that finding ways to bring us even more information is better. Indeed, even if the increasingly prominent role social media now plays in everyday life has led to some unintended negative consequences, the way to combat these undesirable outcomes is through more information, not less. And thus the ideological reluctance on the part of the social media platforms to curb what their users post. Conveniently, this ideological stance aligns with the financial incentive these platforms have to maximize their profits, but it is a heartfelt belief nonetheless.

In this week’s post, I will more carefully explore this Silicon Valley conviction that “more information is better,” citing one shining example of an online platform for which this dynamic would appear to be true—not, incidentally, a social media platform. Next week, I’ll return to social media, considering the somewhat different dynamic it has evolved, with results that are more socially questionable. By this point in our series on The Most Important Technical Distinction in the World, it will come as no surprise that the difference I want to highlight is between a dynamic that gains stability by making deliberate use of negative feedback and one that tries—with varying degrees of success—to harness the explosive power of positive feedback.

*   *   *   *   *

Few of us would dispute that the increased access we now enjoy to information of all types has, on balance, improved the quality of our lives. To cite one example, a generation or two ago, medical information was confined to medical textbooks and professional journals, so if you did not have a medical degree, you were completely dependent on a doctor who might be juggling hundreds of patients to determine what was wrong with you and what treatments might help. Today, in contrast, any of us can get on WebMD or even Google to research our symptoms, and thus we can use the ten minutes we get with our doctor to have an informed discussion rather than simply accepting whatever snap judgment this authority figure might offer. I can imagine this new assertiveness on the part of patients sometimes getting annoying for doctors, but I think even most doctors would agree that the general increase in the availability of medical information has, on balance, been a good thing for both patients and doctors.

If we generalize from this particular case, we arrive at the baseline claim associated with the Silicon Valley ideology: “More information is better.” It is not difficult to see, however, that a mere quantitative increase in information could also lead to problems. For one thing, the more information we receive, the harder it gets to sift through. And if we are flooded with information, some of it will inevitably be of poor quality—some of the “medical advice” we find on Google will be pure quackery, proffered for no other reason than to sell snake oil. Accordingly, even if an increase in the sheer quantity of available information can be helpful, its benefits will be undermined when true and false information are mixed indiscriminately. What would be truly useful, therefore, is an online platform that could raise both the quantity and quality of information entering the public square. In Silicon Valley, one platform that has pulled off this feat in stellar fashion is Wikipedia.

Most of us have been on Wikipedia and know roughly how it works, but a brief review of its history and operation will be helpful. Wikipedia was launched in 2001 as an online encyclopedia, with the novel twist being that its content would be crowdsourced. From the beginning, Wikipedia has operated as a non-profit organization, soliciting donations from users to run its servers and maintain its small professional staff, with most of the legwork required to grow and maintain what is by now a gargantuan encyclopedia being performed by volunteer editors and the public at large. Anyone is free to get on Wikipedia and post a new entry on some topic of interest to them, with detailed citations being strongly encouraged to help ensure that any information being posted is accurate. Other users can then read the article, and if they have something to contribute to the topic, they can either add to the article or correct any errors they find. Teams of volunteer editors oversee this whole process, immediately taking down any information that is obviously false, incendiary, or designed to sell some product. Generally, however, it is individual users who police the content they encounter, with an entry on Star Trek that suggests Han Solo was one of its main characters being sure to receive a correction within minutes.

With this basic account of Wikipedia’s operation on the table, we can see how this platform is able to increase both the quantity and quality of the information it makes available to users. Given its electronic format, Wikipedia is not limited to 15 or 20 volumes the way print encyclopedias used to be. And with most of its content coming from a worldwide army of unpaid contributors, budgetary restrictions do not limit how many articles can be written. This huge pool of contributors, moreover, already begins to improve the quality of information that can be expected in most articles. With users being most likely to contribute to topics they know well, the Wikipedia article on cyanobacteria may well be written by some of the world’s leading authorities on cyanobacteria, rather than by the sort of generalist who might once have gotten a job writing science articles for a commercial print encyclopedia.

These quantitative improvements that a crowdsourced online encyclopedia can offer being noted, the real innovation Wikipedia pioneered, leading to dramatic improvements in the quality of information it hosts, stems from the opportunity users have to correct errors. As will be clear if you’ve read my earlier posts on the workings of modern science or traditional journalism, this sets up a negative feedback mechanism that tends to rein in any claims that stray too far from the objective truth. In the case of Wikipedia, the comparison with journalism is most apt. As we saw, when a journalist is reporting a story, she must first try to confirm the truth of any facts she uncovers, ideally by interviewing multiple sources. The reporter will then pitch the story to her editor, who will attempt to poke holes in it. If the story is then published, other reporters or news organizations may try to refute it, publishing any evidence they can find that would show the original reporting was wrong. The system is messy, but with any claims that get reported having to survive at least three levels of scrutiny from multiple parties with different interests, blatantly false reports will generally get winnowed out, leaving the reporting that remains to gravitate toward the objective truth.

The process Wikipedia utilizes is similar: individual contributors are strongly encouraged to cite one or more sources when writing or revising an article, volunteer editors can reject content they deem to be untrustworthy or in violation of other Wikipedia standards, and an unlimited number of other readers can then identify and correct any errors they find. The obvious way in which Wikipedia differs from either science or journalism is that the function of critiquing and correcting is not limited to the members of one particular profession. On Wikipedia, literally anyone with enough interest in a particular subject to be able to spot errors can make corrections. Of course, with non-professionals being involved, sometimes “corrections” can themselves be wrong, whether as a result of error or because a user is intentionally posting false content with some ill intent. In either case, however, the revisions are themselves open to further public scrutiny and revision, so in theory, the content posted on Wikipedia will tend to move in the direction of objective truth. This theory depends, however, on one key assumption.
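Before turning to that assumption, it may help to see this multi-layered filtering in miniature. Below is a small illustrative sketch in Python; the catch rates are invented purely for the example and are not measurements of Wikipedia’s actual review process. The point is simply that a false claim has to slip past every independent layer of scrutiny in order to survive.

```python
# Illustrative sketch only: the catch rates below are invented for the example,
# not measurements of Wikipedia's actual review process.

def false_claim_survives(catch_rates):
    """Probability that a false claim slips past every layer of scrutiny,
    assuming each layer independently catches it at the given rate."""
    p_slip = 1.0
    for p_catch in catch_rates:
        p_slip *= (1.0 - p_catch)
    return p_slip

# Three hypothetical layers: the contributor's own cited sources,
# the volunteer editors, and the reading public at large.
print(f"Past sourcing alone:       {false_claim_survives([0.5]):.2f}")            # 0.50
print(f"Past sourcing and editors: {false_claim_survives([0.5, 0.6]):.2f}")       # 0.20
print(f"Past all three layers:     {false_claim_survives([0.5, 0.6, 0.9]):.2f}")  # 0.02
```

Even with these modest made-up rates, stacking three independent checks shrinks the odds of a false claim surviving from a coin flip to a couple of percent.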

The assumption in question is that the vast majority of people using Wikipedia belong to what we could call the “information mainstream.” These are people who, if they are looking up information, would like the content they find to be true. They would prefer, in other words, to encounter genuine information rather than disinformation. People in the information mainstream, moreover, will generally share the moral values of their broader society. Of course, moral values can be diverse, even within a particular society, but general agreement can be found in most contemporary societies for such broad moral claims as, “Mass murder is bad.” If this is a claim to which you can subscribe, you are probably in the mainstream.

The information mainstream can be contrasted with the “information fringe.” People on the information fringe may not really care whether the information they consume is true. Alternately, they may be willing to believe certain claims on the basis of very little solid evidence, or even in the face of a great deal of contrary evidence. People on the information fringe, moreover, may subscribe to moral values starkly different from those of the mainstream. This does not have to be true; someone who believes strongly in UFOs may be perfectly moral in their beliefs and actions, judging by mainstream standards. But an example of a person who reflects a fringe stance both epistemologically and morally would be a Holocaust denier. Someone who goes around claiming that the Holocaust was faked may or may not genuinely believe this to be true. Regardless, this person will probably not believe that getting their historical facts straight is paramount, with their deeper belief being that, even if Hitler did kill millions of Jews, this was not such a bad thing.

With this distinction on the table between the information mainstream and the information fringe, the success of a platform like Wikipedia rests on the assumption that the vast majority of its potential users will belong to the information mainstream, as compared to the relatively small number belonging to the information fringe. If we stick with the example of Holocaust denial, this assumption would appear to be justified. I have not looked at any research data lately, but I’m willing to guess that if you were to survey a random group of 100 Americans, a large majority—maybe 95, maybe more?—would tell you that they believe the Holocaust was real, that it killed around 6 million Jews and others, and that it was a horrific chapter in history.

If my statistical guess is anywhere close to right, this would imply that our sample group contains as many as 5 Holocaust deniers. And it is entirely possible 1 of them will be both mean-spirited and savvy enough to get onto Wikipedia and try to revise its Holocaust article in a way that casts doubt on this historical event. In this scenario, however, 95 people will find the revised information to be both obviously false and morally repugnant. Granted, not all of these people will be visiting Wikipedia on any given day, and of those who stumble across the false information in the Holocaust entry, most will not feel passionate enough about the issue to go to the trouble of making a correction. Still, if you scale up to the hundreds of millions of people who actually use Wikipedia, then even though this also increases the number of Holocaust deniers who could potentially plant disinformation, this fringe group will still be outnumbered roughly 20 to 1, if not more, by mainstream users who know the Holocaust was real. And within this mainstream group, we can be confident there will be a substantial number of people who care very passionately about this issue, perhaps because they lost family members in the Holocaust. These people will be likely to monitor Wikipedia and quickly take down any denialist disinformation that gets posted.
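To see why this lopsided ratio matters even when only a tiny fraction of either group ever touches a given article, here is a back-of-the-envelope simulation in Python. Every number in it is invented purely for illustration; the only point is that even a fringe assumed to be twice as eager to edit as the mainstream still gets swamped.

```python
import random

# Toy model with invented numbers: a population split roughly 95/5 between the
# information mainstream and the information fringe, where only a tiny fraction
# of either group ever edits a given article.
random.seed(0)

POPULATION = 1_000_000          # hypothetical pool of potential editors
FRINGE_SHARE = 0.05             # assumed share of fringe users
P_FRINGE_POSTS = 0.001          # chance a given fringe user posts disinformation
P_MAINSTREAM_CORRECTS = 0.0005  # chance a given mainstream user bothers to correct it

fringe_users = int(POPULATION * FRINGE_SHARE)
mainstream_users = POPULATION - fringe_users

disinfo_edits = sum(random.random() < P_FRINGE_POSTS for _ in range(fringe_users))
corrections = sum(random.random() < P_MAINSTREAM_CORRECTS for _ in range(mainstream_users))

print(f"Disinformation edits:     {disinfo_edits}")
print(f"Corrections:              {corrections}")
print(f"Corrections per bad edit: {corrections / max(disinfo_edits, 1):.1f}")
```

In this toy run the corrections outnumber the disinformation edits by roughly ten to one, and that is before accounting for the especially passionate correctors just described.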

And thus, more information would, in fact, appear to be better. Or more specifically, more people contributing more information is better, provided that most of the users involved belong to the information mainstream, whereas only a few belong to the information fringe. True, an open platform that allows anyone to contribute will allow fringe users to insert false or vitriolic claims. But if the general population is as lopsided in its composition as we have assumed, then the large number of mainstream users who are interested in truthful, morally responsible information will simply swamp the small number of fringe users who might attempt to post false or inflammatory content. And the tremendous success Wikipedia has enjoyed over the years would appear to demonstrate that this assumption is, in fact, correct—that mainstream users do outnumber fringe users by a large enough margin that open internet platforms such as Wikipedia will gravitate in the direction of sharing truthful, morally responsible content.

And thus the Silicon Valley ideology would appear to be vindicated. More information does appear to be better. And even if—as is inevitable—some of this information turns out to be bad, in a society generally composed of good people, the good information will tend to overwhelm and crowd out the bad. The answer to bad information, in other words, would appear to be more information, not less.

The crucial point to note here, however, is that Wikipedia has been successful in generating large amounts of high-quality information because it deliberately employs multiple negative feedback mechanisms that actively work to root out false or morally objectionable claims, drawing the general conversation back from the fringes and toward the factually based, socially responsible middle. Or to invoke the image I introduced a few weeks ago of a rock on a hillside, Wikipedia has managed to carve out a large basin for the rock to sit in, such that if a strong wind comes along and blows the rock partway up the basin’s slope, gravity will draw the rock back down to its equilibrium position at the bottom of the basin. If a bad actor disrupts the platform by posting false content, that is, the overwhelming number of users who are more interested in true content will pull the platform back to a point where its claims are in equilibrium with the objective facts of the world.
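For readers who like to see the metaphor written down, here is one way to sketch the basin as a simple negative-feedback update rule in Python. The numbers are again invented: “drift” stands in for how far an article has strayed from the facts, and the proportional pull-back term plays the role of gravity.

```python
import random

# Minimal sketch of the basin metaphor: 'drift' is how far an article has strayed
# from the objective facts (0 = the bottom of the basin). Occasional gusts of
# disinformation push it up the slope; corrective editing, the negative feedback
# term, pulls it back toward equilibrium. All numbers are invented.
random.seed(1)

drift = 0.0        # distance from the factual equilibrium
PULLBACK = 0.5     # assumed strength of corrective editing each round
GUST = 3.0         # size of an occasional disinformation shock

for day in range(10):
    if random.random() < 0.2:        # a bad actor shows up now and then
        drift += GUST
    drift -= PULLBACK * drift        # negative feedback: pull proportional to the drift
    print(f"day {day}: drift from the facts = {drift:.2f}")
```

However hard the gusts blow, the pull-back term keeps dragging the drift toward zero, which is the whole character of a negative-feedback system.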

But what if our rock was not sitting in a mountain basin? What if it was instead perched atop a jagged peak? Which is to say: What if an online information-sharing platform was built not around negative feedback, generally conducive to stability, but rather around positive feedback, with its potential for unleashing careening snowballs? We will consider these questions next week when we turn to the evolution of the major social media platforms, as recounted by Fisher in The Chaos Machine.
