According to the official storyline, Section 230, part of the Communications Decency Act of 1996, was designed to encourage platforms to give free speech a place to thrive. Why has this not happened? Why are conservative voices cut off so often, and why are the consequences of even marginally controversial speech so high?

Section 230 reads as follows: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This text is built on the assumption that services like Facebook and Twitter merely provide a space in which others can speak: they are platforms, the high-tech equivalent of a stage or a wooden box in a public square. Anyone can get on it and make a speech. If someone, say, threatens another person’s life while standing on the platform, that person alone is responsible for that speech. The platform is not; the platform itself is neutral.

For those who believe these services are this kind of platform, Section 230 should be reformed to make them more like the wooden box in the corner of the public square: as close to perfectly neutral as possible, approaching the status of a public utility. However, because courts have clearly said these companies have a free speech right to filter content they find harmful to their business, moving toward minimal filtering and complete neutrality is likely impossible.

Many progressives are happy to discard the image of the neutral platform, reasoning that because these services can filter at scale, they should filter at scale—to create a “healthier town square.” Progressives threaten to reform the law so these services can be forced to filter content the government finds harmful. The more libertarian response is that these services should be forced to be transparent about their filtering (or editorial) decisions.

Neither of these responses will solve the problem. The free speech promise of Section 230 is not working, and it’s time for everyone to insist on reform. But if we want that reform to actually be meaningful, we have to start by asking whether the “platform” versus “content creator” dichotomy can really stand. Are social media services really platforms in the sense considered in Section 230, or are they some sort of “new thing” that needs a new form of regulation?

How Content Filtering Really Works

To answer this question, it’s best to begin by considering how content filtering really works.

First, forcing these services to be completely transparent about how and what they filter is likely impossible, both technically and culturally. Technically, almost no one understands how these services’ filters actually work. Even if the entire filtering process could be exposed publicly, it would be indecipherable to the average user.

Second, in the popular imagination (including the imagination of many lawmakers, researchers, and intelligent people), filtering means that when a user posts something, it is either permitted on the service or not. In reality, content filtering is not so clear-cut. Consider the many different ways a service can quietly promote or suppress a particular piece of content. Shadowbanning is an extreme case: the service allows a user to post but distributes the post to no one at all. More subtly, a service may allow a user to post content and then distribute it only to a tiny group of people. Services also decide where to place information in the user interface and when to distribute a user’s post. By placing a post halfway down the page rather than at the top, the service can de-emphasize the content.
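
To make the point concrete, here is a minimal sketch of how a ranking step can suppress content without ever blocking it. Everything here is hypothetical (the field names, the scoring rule, the demotion factor); no real service’s pipeline is this simple, but the mechanism is the same: every post is technically allowed, and a score quietly decides who sees it and where it appears.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def visibility_score(post: Post, demoted_topics: set[str]) -> float:
    """Hypothetical scoring step: start at 1.0 and quietly demote
    posts touching topics the service has decided to suppress."""
    score = 1.0
    if any(topic in post.text.lower() for topic in demoted_topics):
        score *= 0.1  # not blocked, just rarely shown
    return score

def build_feed(posts: list[Post], demoted_topics: set[str]) -> list[Post]:
    """Rank posts by score; a demoted post lands far down the page or reaches
    almost no one, which users never experience as 'filtering'."""
    return sorted(posts,
                  key=lambda p: visibility_score(p, demoted_topics),
                  reverse=True)

feed = build_feed(
    [Post("a", "Cute cat pictures"), Post("b", "My take on the election audit")],
    demoted_topics={"election"},
)
print([p.text for p in feed])  # the demoted post sinks, silently
```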

Third, these services often take advantage of the time value of information, making a “mistake” by blocking some post or account before an election and issuing apologies afterward. Choosing when to publicize a post is just one more form of filtering and information manipulation that should alert us to these companies’ status as content creators.

Finally, these companies do not want to divulge their filter rules. These rules are “baked into the service,” and hence part of the value handed to investors in the form of profit. Further, the companies change the rules constantly (through continuous improvement or agile methodologies), and it is hard to correlate events on a timeline against continuously changing filters.

Not Platforms—Content Creators

Creating practical remedies requires going back to the intent of Section 230: to allow platforms to operate without fear of being held responsible for content generated by individual users. This intent cuts to the heart of what social media companies believe about themselves. According to this way of looking at things, the service is not responsible for the content, because it did not create it. At the same time, however, the service may filter and shape the information created by others in any way it likes, because it is the medium through which that content is created and shared.

However, this definition of platform does not get social media services “off the hook” in terms of Section 230’s protections. To understand why, let’s begin someplace different, with one of the Internet’s favorite riddles. If you had an infinite number of monkeys, each with its own typewriter and an infinite amount of time in which to type, would they produce the works of Shakespeare? The answer, of course, is no. The monkeys would fail to produce Shakespeare’s works not for lack of typing, but because no Shakespeare-shaped filter was applied to their output. If you could put a Shakespeare-shaped filter in front of the monkeys’ output, you would, indeed, get the works of Shakespeare. When working with infinite input, the filter does the creating, not the many monkeys manipulating keyboards.
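
A toy illustration, not any company’s actual system, makes the point: feed a filter nothing but random keystrokes, and the output is determined entirely by the filter, not by the “monkeys” producing the input.

```python
import random
import string

TARGET = "to be or not to be"  # the "Shakespeare-shaped" filter

def random_keystroke() -> str:
    # A "monkey" typing pure noise: random lowercase letters and spaces.
    return random.choice(string.ascii_lowercase + " ")

# The filter keeps only the keystrokes that happen to match the next
# character of the target and throws everything else away.
output = ""
while output != TARGET:
    key = random_keystroke()
    if key == TARGET[len(output)]:
        output += key

print(output)  # always the target text: the filter did the creating
```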

Large-scale social media companies have an almost infinite set of inputs to work from and can filter this input to shape what they want the world to see. Hence, Facebook and Twitter are not platforms even in the sense in which they use that word. These platforms might not create content directly, but they do create content indirectly through filtering. At some point, if you have enough raw material to work with, the law of large numbers takes over, allowing you to create a story even if you are not writing any of the words, sentences, paragraphs, or chapters.

Does this mean these companies should be wholly responsible for the speech on their platforms? Trying to answer this question through the lens of the platform versus content provider dichotomy will always result in a bad answer. These companies seem to be a “third thing” that does not fit into the framework of Section 230. Pushing the false dichotomy of platform versus creator, however, gives these companies more power; they can hide behind their status as a platform, claiming their filtering is perfectly neutral. The complexity and opacity of their filters make it almost impossible to see their bias.

Toward Meaningful Reform

What kind of reform would address this problem? A combination of platform neutrality beneath these services, encouragement of alternative services to grow and compete, and a genuine attempt at making it possible to understand the shape of the filters each service uses.

What does platform neutrality beneath these services mean? First, it means that these platforms should not be allowed to alter the physical infrastructure of the Internet to improve or promote their services. When the Internet first started, it was designed to be a fully decentralized system: the network was “dumb” and the hosts “smart.” Anyone could set up a web server, an email server, a “bulletin board,” or any other service. These services would attract traffic in proportion to the value they brought to the larger world, and their owners could run them for financial or non-financial reasons, whatever they chose.

That dream has been utterly shattered by the overgrowth of a few extensive services. The predominance of Google, Microsoft, Facebook, Amazon, and a few others has reshaped the Internet so that it is increasingly centralized. Almost all data flows to a handful of companies, each of which owns a handful of data centers (or clouds). Where data flows, cables and infrastructure follow; once the cables and infrastructure are built, it will take many years to undo the centralization of data and computing power. Possible actions here might include forcing these companies to store data locally rather than centrally and to support the growth of a truly distributed infrastructure.

Second, it means that hosting services should be considered the real platforms. Thus, they should be required to be completely neutral. Amazon Web Services should not have “terms of service” allowing them to shut down a service like Parler. If Parler is breaking the law, that should be addressed by the proper authorities—cloud providers should not be forced, expected, or allowed to deal with the situation (other than by helping the appropriate authorities where it makes sense).

What about encouraging alternative services to grow and compete? The steps above (forcing these large companies to pay to rebuild a truly decentralized infrastructure, and requiring true platform neutrality for the next layer down) would be a good start. Beyond this, local governments should encourage the growth of services serving their specific areas.

Of course, this means users would need to learn how to navigate multiple services to get the information they need. For instance, the average person might need to access one service to connect with people in the local community, another to connect with people in their state, and a third (or many more) to connect with people in a shared interest group, or childhood friends, etc. This might, at first, seem to be a problem for users who prefer a unified experience—but is this different from the situation today? There are already hundreds of services, none of which reaches every possible user. Already there are services for photo sharing, voice chat, business-oriented information, etc. The promise of something like Facebook reaching every person in the world will never be achieved, and the perils of such a system might be too great anyway.

Distributing user information in smaller bits across multiple services would also be a boon to user privacy, preventing a small number of companies from gaining huge influence over our lives.

Making Filters More Transparent

Finally, it’s time for a real attempt at making it possible to see how companies like Facebook filter the content they are given, for free, by their hundreds of millions of users. There is no perfect way to do this, but one good option might be providing a research portal to those filters.

The way this might work is this: each company with more than some number of users would be required to provide a portal through which a researcher can submit “generated posts.” These posts would run through the same process the service uses to filter actual user posts, and the portal would return a number between 1 and 100. If the generated post returns a 1, the researcher could assume that post would be blocked. If it returns a 100, the researcher could conclude the service would promote the post in every way it can. Such a portal would never be perfect, but it would be a better start at transparency than “community standards,” or any other such thing. Service owners might complain that this opens them up to being gamed or losing their “secret sauce,” but this would be the price they would need to pay to retain any semblance of Section 230 protection.
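
As a sketch of what the researcher’s side of such a portal might look like (the endpoint, field names, and response format are all hypothetical, since no such requirement exists today):

```python
import json
from urllib import request

# Hypothetical endpoint a service above the user threshold would expose.
PORTAL_URL = "https://example-service.test/research/score"

def score_generated_post(text: str) -> int:
    """Submit a generated post and return the 1-100 score the service's
    real filtering pipeline would assign it (1 = blocked, 100 = promoted)."""
    payload = json.dumps({"generated_post": text}).encode("utf-8")
    req = request.Request(PORTAL_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return int(json.load(resp)["score"])

# A researcher could probe the shape of the filter by varying one element
# of a post at a time and comparing the returned scores.
for text in ("Vote for candidate A", "Vote for candidate B"):
    print(text, "->", score_generated_post(text))
```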

The role of Section 230 in creating the disaster we call the “digital town square” cannot be overemphasized—it is a gift to service owners on which they have grown rich and powerful beyond imagination. It’s time to find realistic ways to bring these companies under control. What we cannot allow is for some faux reform to take place—enforcing some new set of rules that will only allow these same companies to catapult to the next level of power and financial success while pretending to solve the problem.