DOWN THE RABBIT HOLE: SOCIAL MEDIA’S EMBRACE OF POSITIVE FEEDBACK

We’ve all fallen down an online rabbit hole. You’re scrolling through your news feed and you click on a video that looks mildly interesting. As you scroll down further, a similar post appears, although its language is a bit edgier. A link someone has left in the comments section takes you to a website devoted to the same topic, although in full-on hyper-partisan mode, making no pretense of objectivity. You continue following the links you find on that site, until… suddenly, it’s two hours later and you’re reading some bizarre conspiracy theory on some topic you had never even thought of before, wondering how on earth you got there.

This week, the Progressive Worldview Blog continues its multi-post series on The Most Important Technical Distinction in the World, the distinction between positive and negative feedback. The topic first occurred to me as I was reading The Chaos Machine by Max Fisher, which offers a disturbing look at how the major social media platforms have been tweaking their algorithms over the past fifteen or so years to maximize user engagement, though at the cost of driving an explosion of online disinformation and vitriol.

I was particularly interested in one of the reasons Fisher cites for why the owners of the major social media platforms have been so reluctant to take meaningful steps to curb the false and inflammatory claims now commonplace on their sites. Beyond the potential financial cost such measures might entail, Fisher argues, these tech entrepreneurs are dyed-in-the-wool believers in a Silicon Valley ideology holding that, “More information is better,” with the corollary being that, “More user engagement is better.” This ideology concedes that open online platforms will attract some bad information. But the answer to bad information, it insists, is good information, so if you open up your platform even wider, a tidal wave of good information will swamp the bad information, thus ultimately serving the social good by bringing the world both more information and better information.

Last week we considered one online platform that has lent this ideological conviction some credence: Wikipedia, the online encyclopedia that allows users to post articles on any topic imaginable, while allowing other users to correct any errors they find. As its designers foresaw, the content posted on Wikipedia is not always true or socially responsible. Sometimes contributors make honest mistakes, and other times bad actors will intentionally post false or vitriolic content. By crowd-sourcing the editing process, however, Wikipedia establishes a negative feedback mechanism that tends to draw the information back from what I termed the “information fringe” and toward the “information mainstream.” With most people wanting the content they consume to be true and to align with widely shared social values, that is, Wikipedia allows this large body of responsible users to keep the platform in a mainstream zone of truth and moral acceptability by quickly editing out any fringe information that gets posted.

The fact that Wikipedia has been so successful in amassing a huge quantity of high quality information would appear to confirm the Silicon Valley dogma that more information is better, and that more user engagement is better, since the large amount of good information posted on Wikipedia generally succeeds in swamping out the smaller amount of bad information. Two points, however, are worth noting here. First, Wikipedia has always functioned as a non-profit organization. Second, Wikipedia’s operators have very deliberately fostered the negative feedback mechanisms that keep its content in check. As Fisher makes clear, the situation has been very different in the for-profit world of social media, where a steady increase in the use of positive feedback to drive up user engagement and profitability has generated a very different information dynamic.

In the space of a single blog post, I cannot pretend to adequately summarize the evolution of the social media industry; readers wanting more detail are encouraged to read The Chaos Machine. Here I will just note a few key developments, mostly using Facebook as an example of what has happened across the social media industry. These developments mark key milestones in the industry’s drift away from the information mainstream and toward the information fringe, largely driven by the fact that its algorithms have increasingly come to generate a number of positive feedback mechanisms, leaving any meaningful form of negative feedback behind.

When Mark Zuckerberg first launched Facebook in 2004 as an online platform that would allow college students to keep in touch, it operated on the MySpace model, where everyone maintained their own homepage, so if you wanted to see what your friends were doing, you had to manually visit their pages. This was cumbersome, so many users welcomed the introduction in 2006 of the news feed. Now, whenever one of your friends posted what they had for dinner or what bar they were partying at, this would appear in your news feed, making it much easier to keep tabs on one another.

This type of social network did allow some content to go viral, or to spread in the exponential fashion that indicates the emergence of a positive feedback loop. Assume one of my friends shares a particularly humorous cat video. Tickled, I might re-share this video with my other friends. If 10 of these friends then re-post the video, and each of them can get 10 of their friends to re-post it, etc., the content will soon explode across the internet. When content goes viral in this fashion, it can shape the information we all see in our news feeds: we are all likely to receive the same popular posts everyone else is seeing and sharing. In the early days of social media, however, this phenomenon was fairly benign, in the sense that it tended to keep content within the information mainstream—more or less true and morally inoffensive. Most users, after all, occupy the information mainstream, so this is the kind of content they want. And most mainstream users will mainly have friends who are in the mainstream. To go viral, therefore, a particular piece of content would need to be fairly mainstream, or it would die out from lack of re-posting.
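To make the arithmetic of that cascade concrete, here is a minimal sketch in Python (the numbers are illustrative, not drawn from Fisher): whether content explodes or fizzles depends on whether each sharer recruits, on average, more or less than one new sharer.

```python
# Minimal sketch of the viral cascade described above (illustrative numbers only).

def viral_cascade(initial_sharers=1, friends_per_user=10, reshare_rate=1.0, generations=6):
    """Return the cumulative number of shares after each generation."""
    sharers = initial_sharers
    total = initial_sharers
    history = [total]
    for _ in range(generations):
        # Each current sharer exposes the post to `friends_per_user` people,
        # and `reshare_rate` of those exposed re-share it in turn.
        sharers = sharers * friends_per_user * reshare_rate
        total += sharers
        history.append(round(total, 2))
    return history

# Mainstream content that everyone re-shares explodes: 1, 11, 111, 1111, ...
print(viral_cascade(reshare_rate=1.0))
# Fringe content in a mainstream network fizzles, because too few recipients re-post it.
print(viral_cascade(reshare_rate=0.05))
```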

The next innovation came around 2012 as Facebook made a conscious effort to expand the size of the social networks to which users belonged. To this point, most users had organically limited their networks to somewhere around 150 friends, given that our brains did not evolve to keep track of many more close acquaintances than this. Hoping to keep users scrolling through their feeds for longer stretches, however, Facebook began including certain posts that came from outside a user’s personal network. Now I might encounter posts, not just from friends, but from friends of friends. And with the platform being able to track which posts I click on to watch some video or follow some link, this slowly morphed into the platform inserting posts into my feed from users with whom I did not share any personal connection, but who were posting content in which I might be interested, based on my clicking history.

From Facebook’s standpoint, this was a powerful new strategy, given how its business model works. Like all the major social media platforms, Facebook typically serves new ads with every click. This allows the platform to bill its advertisers by the click, thus giving it a tremendous incentive to generate as many clicks as possible. And by tracking the content I have previously clicked on to determine my interests, Facebook could fill my news feed with similar posts in which I might also be interested, thereby raising the chances I would click on one of them. Of course, Facebook touted this innovation as a valuable new service it was providing for users, bringing them information in which they were interested, but which they otherwise might never have stumbled across. And this may have actually been true at the time, producing a win-win situation.

In fact, the resulting dynamic may have been win-win-win. The social media platforms were making boatloads of money. Users were receiving large amounts of information in which they were interested. And the public interest was still probably being served by the fact that the prevailing dynamic tended to keep most content posted on social media somewhere in the ballpark of being true and morally unobjectionable. Most users, again, naturally occupy the information mainstream, which means they will not only have mainstream friends, but also mainstream interests. Given their clicking histories, therefore, the social media platforms were probably feeding this very large group of users content that was mostly true and morally unobjectionable. True, the practice of pushing suggested content on users probably meant the smaller number of people who occupied the information fringe began receiving more fringe content. But since these people were stranded out on the social fringe, the content they consumed did not have much of an impact on the larger society. This, however, would soon begin to change.

The next shift in strategy on the part of the social media platforms did not take place overnight. It was more of a gradual process, starting somewhere around the mid-2010s and slowly gathering steam, continuing to the present day. Facebook and the other platforms have been quite opaque about this particular development, so many of its details remain unclear. In a broad sense, however, two things began to happen.

First, as platforms like Facebook continued to refine the algorithms they used to recommend content to users, they began changing the precise question these algorithms were written to answer. Originally, this question was, “Given a user’s clicking history, what sort of content would this person be interested in seeing next?” Slowly, however, this morphed into a slightly different question: “Given this user’s clicking history, what sort of post would they be most likely to click on next?” Of course, the whole point of building algorithms to answer the first question had always been to generate as many clicks as possible, with the engineers writing these algorithms naturally assuming that people will be most likely to click on posts featuring topics of interest to them. Over time, however, it began to become clear that this assumption is not entirely correct. What really piques our interest, it turns out, are not topics like gardening, football, or even party politics. What really stirs the juices are emotions like fear, anger, incredulity, titillation, and moral outrage. Given this quirk of human psychology, it follows that, even if I am an avid gardener with little interest in football, the next post I click on may not be about how to grow a better chrysanthemum—provided the post above it notes that a famous NFL star was caught using kittens as sex toys.

The second development that took place around this same time is that machine learning grew powerful enough that computers could largely take over the task of writing the algorithms used to steer content to users. Applying deep learning to huge databases of user behavior, computers can develop far better predictions than any human engineer as to what sort of content a given user is most likely to click on next. Computers are further unhindered by something that can hamstring human engineers as they attempt to generate as many clicks as possible: a conscience. A human engineer who discovers, for instance, that racist dog whistles are likely to attract more clicks than endearing cat videos may nonetheless decline to write an algorithm promoting dog whistles on the grounds of basic decency. To a computer, however, this particular calculation is no different than any other instance of, “If A > B then C,” so it will go ahead and promote the racist content. Indeed, once this new generation of algorithms was in place, it began fostering at least three different positive feedback mechanisms that started pushing much of the content posted on social media away from the information mainstream and in the direction of the information fringe.
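To make that “If A > B then C” point concrete, here is a toy sketch in Python. It is not any platform’s actual code, and the click scores are invented, but it shows the essential shape of a ranker whose only objective is expected clicks: nothing in the comparison knows or cares whether a post is true or decent.

```python
# Toy illustration (not any platform's real code): a feed ranker that orders
# candidate posts purely by a model's predicted click probability.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_click_prob: float  # hypothetical score from a trained model

def rank_feed(candidates):
    """Sort posts by predicted clicks, highest first.

    Note what is absent: there is no term for accuracy, decency, or social
    cost. If outrage bait scores 0.31 and a cat video scores 0.12, the outrage
    bait wins -- the same "if A > B then C" comparison as any other.
    """
    return sorted(candidates, key=lambda p: p.predicted_click_prob, reverse=True)

feed = rank_feed([
    Post("Endearing cat video", 0.12),
    Post("How to grow a better chrysanthemum", 0.08),
    Post("Shocking claim about a famous NFL star", 0.31),
])
for post in feed:
    print(post.title, post.predicted_click_prob)
```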

The first of these dynamics was an immediate product of the fact that individual users were starting to find more shocking, divisive, or scarcely believable content in their news feeds, given that an algorithm thought they might be tempted to click on it. Recall that most people—almost by definition—occupy the information mainstream, meaning they would generally like the information they receive to be true and to align with broadly accepted moral standards. Accordingly, as long as their feeds contained nothing but posts from their friends or about their existing interests, these users tended to keep themselves in mainstream territory. Human psychology being what it is, however, it is difficult for anyone to resist clicking on a link that stirs one of our primal emotional responses, whether this be fear, shock, outrage, or triumphalism. And this initial click may actually yield some fairly mild content; the algorithms have learned that users will not generally jump from cat videos to stories about pedophiles infiltrating the government and murdering their political rivals. But that first click can initiate a trip down the rabbit hole, as the algorithms begin ramping up the provocative nature of the content recommended, even as the user starts to grow desensitized to it, hence requiring content that is more and more outrageous to generate that same, strangely pleasurable emotional jolt. And thus, two hours later, you find you are reading that article about the pedophile assassins. And oddly, it now does not sound so implausible…
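A stylized way to see why this loop ratchets upward (my own illustration, not a model drawn from the book): suppose each click slightly desensitizes the user, so the next recommendation has to be a bit more provocative to land. The intensity then grows geometrically instead of settling at a fixed level—the signature of positive feedback.

```python
# Stylized model of the rabbit-hole loop: each click raises the user's
# tolerance a little, so the recommender must escalate intensity to keep
# the emotional jolt (and the click probability) up.

def rabbit_hole(clicks=10, tolerance=1.0, desensitization=1.3, margin=1.2):
    """Yield the 'outrage intensity' of successive recommended posts."""
    intensity = tolerance * margin          # first post is only mildly edgy
    for _ in range(clicks):
        yield round(intensity, 2)
        tolerance *= desensitization        # user habituates to that level
        intensity = tolerance * margin      # next post must top it to register

print(list(rabbit_hole()))
# e.g. [1.2, 1.56, 2.03, 2.64, 3.43, ...] -- geometric growth away from the
# mainstream, rather than settling back toward a fixed level.
```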

With the first positive feedback loop thus pushing individual users out toward the information fringe with content that is more and more outrageous, this sparks a second, complementary positive feedback loop. One factor social media algorithms have always considered in pushing content is the number of clicks a given post has already received. If a post has already been viewed by thousands or millions of viewers, there must be something appealing or intriguing about it, making it more likely the next user will click on it; hence, popular posts tended to be pushed by platforms, thus making them even more popular. Users on the information fringe had long been sharing fringe content with their friends and fellow travelers, but with such a small percentage of the population sharing their fringe outlook, the content they posted had little chance of winning enough clicks to get recommended to users beyond their personal networks. As the new algorithms began pushing mainstream users toward more divisive, dishonest content, however, these fringe posts began to win more clicks, so the algorithms began recommending this fringe content to even broader, more mainstream audiences—thereby winning it more clicks, and thereby more recommendations, etc. And thus the line between fringe and mainstream began to blur as ever more questionable content began to make its way to mainstream users.
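This rich-get-richer dynamic can be sketched with a simple simulation (again my own illustration, with made-up numbers): each new click is awarded in proportion to the clicks a post already has, so an early lead—however it was won—tends to compound.

```python
# Stylized simulation of the second loop: the more clicks a post already has,
# the more widely it is recommended, which earns it still more clicks.

import random

def popularity_loop(posts, rounds=1000, seed=0):
    """posts: dict of name -> initial click count. Each round, one new click
    lands on a post with probability proportional to its current clicks."""
    rng = random.Random(seed)
    clicks = dict(posts)
    for _ in range(rounds):
        names = list(clicks)
        weights = [clicks[n] for n in names]
        winner = rng.choices(names, weights=weights)[0]
        clicks[winner] += 1
    return clicks

# The final split is driven largely by early, essentially random clicks --
# popularity compounds regardless of whether the post is true or decent.
print(popularity_loop({"mainstream gardening post": 10, "outrage-bait fringe post": 12}))
```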

If this increase in the visibility of fringe content was generally welcomed by fringe users, it also spurred an intense new competition for clicks and attention within the fringe community. Thus began what Fisher describes as an “arms race” among fringe content providers, each striving to be more outrageous, titillating, or enraging than the next in order to win the attention of social media users who were both getting inundated with fringe information and growing more desensitized. Accordingly, as the total quantity of information being disseminated on social media continued to rise, its average quality went down—at least if we continue to insist on that quaint old standard of information being better if it is more or less true and generally aligned with mainstream social values.

This has been a very brief look at the evolution of social media over the past 15 years or so, but it gives us enough to go back to the Silicon Valley ideology holding that more information is always better, and that more user engagement is always better, on the grounds that if you have enough information out there, the good information will tend to crowd out the bad. This is the ideology, again, that Fisher suggests has given the owners of the major social media platforms a blind spot with respect to the real damage their platforms have been causing by increasingly promoting disinformation and divisiveness. I think we are now in a position to add that this blind spot has been accentuated, if not allowed to arise in the first place, by a failure to properly distinguish between positive and negative feedback.

When the owners of social media platforms tout the mantra of “more information is better,” I would venture, they have in mind a platform like Wikipedia. As we have seen, Wikipedia quite intentionally built its entire platform around negative feedback, first and foremost by allowing users to correct one another’s errors. This has had the effect of reining in any content that happens to stray toward disinformation or divisiveness, pulling it back into the information mainstream. Or if we invoke the visual metaphor introduced in prior posts of a rock on a hillside, Wikipedia has carved out a large basin in which its rock can sit, with the basin’s trough representing the information mainstream. Should the rock get pushed up one of the sides of the basin—toward the information fringe—the platform’s dynamics will naturally draw the rock back down to the mainstream trough. In this scenario, more information is better, as is more user engagement, since the large number of mainstream users who want to view mainstream content ultimately provide the gravity that pulls the rock back down.

Of course, Wikipedia has always operated as a non-profit organization, so it has not been subject to the financial pressures that might tempt it to stray from its founding mission of providing, not just large quantities of information, but also high quality information. The opposite has been true in the for-profit world of social media, where generating ever more clicks and ever more advertising dollars is an existential imperative. And thus, as we have seen, whatever noble social goals the owners of the major platforms started out embracing, the drive to engage more users for longer stretches of time has led them to retool their algorithms in ways that generate multiple forms of positive feedback. And while positive feedback is not always bad—sometimes heartwarming or socially significant stories can go viral—positive feedback tends to be highly unstable and unpredictable. And in the case of social media, the unintended effect of the last round of algorithmic changes has been to generate a dynamic shaped more like a mountain peak than a basin. In an inversion of the Wikipedian dynamic, that is, the peak now represents the information mainstream, with the valleys below forming the information fringe. As a result, even the user who starts out in the mainstream is just one push—just one provocative post—from being launched down a slippery slope toward a morass of disinformation and divisiveness.
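The basin-versus-peak metaphor can be written down as two one-line update rules (a sketch of the metaphor itself, not of any platform’s actual dynamics): under negative feedback each step shrinks a user’s deviation from the mainstream, while under positive feedback the same-sized nudge amplifies it.

```python
# The rock-on-a-hillside metaphor as two one-line update rules (my own
# illustration). x measures how far content sits from the information
# mainstream; x = 0 is the trough of the basin, or the mountain peak.

def negative_feedback(x, k=0.3, steps=10):
    """Basin: each step pulls the deviation back toward the mainstream."""
    path = [x]
    for _ in range(steps):
        x = x - k * x        # correction proportional to the deviation
        path.append(round(x, 3))
    return path

def positive_feedback(x, k=0.3, steps=10):
    """Peak: each step amplifies the deviation away from the mainstream."""
    path = [x]
    for _ in range(steps):
        x = x + k * x        # the same-sized nudge, but reinforcing instead
        path.append(round(x, 3))
    return path

# One provocative post nudges a user a little off the mainstream (x = 1):
print(negative_feedback(1.0))  # decays back toward 0 -- the Wikipedia basin
print(positive_feedback(1.0))  # grows without bound -- the slippery slope
```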

In the realm of social media, therefore—at least over the past decade—the Silicon Valley ideology of “more information is better” has not been borne out by the facts. True, the drive to increase user engagement has driven up the total quantity of information shared on social media, thus producing more clicks and advertising dollars. But the strategy the social media platforms have employed to drive up engagement has significantly eroded the quality of information made available in the public square, and this has led to many negative social outcomes, ranging from ethnic violence to vaccine skepticism to election denialism. What has been good for Facebook, YouTube, and Twitter (X) has not been good for the country or the world, as blasphemous as this pronouncement may sound in Silicon Valley.

So what can we do about this? How can we persuade the social media platforms to draw themselves back from their current course and adopt something more like the Wikipedia model in the interest of the public good? I don’t claim to know. I’m far from being an expert on social media, and the enormous amount of money involved makes it seem unlikely that any of us on the outside will be able to influence the behavior of the social media giants. But if we can somehow convince Mark Zuckerberg or Elon Musk that the ideological ground upon which they claim to stand has collapsed beneath their feet, given the variety of feedback now driving their platforms, perhaps this will be a first step.
