The term “internetwork” originally described a set of interconnected networks: a network of networks. The global Internet, the largest and best-known computer network, is exactly that. And because it is a network of networks, no one owns the Internet; each of its constituent networks is owned and operated by a private company, college, government, or other organization.

The composition of the Internet—a network of independently owned networks—is not an accident. The design of each Internet component is intentionally tuned to support a distributed system, a network without a central controlling authority. The distributed nature of the Internet allowed it to spread quickly across the globe and makes it hard to “shut down the Internet” at any national border.

Recently, however, within the last ten years, the Internet has been moving toward centralization. Rather than dispersing data and services throughout the world, content and connectivity are being concentrated in a few hands. This centralization worries many in the engineering and Internet policy worlds.

It’s time for leaders in government and business to notice what is happening to the global Internet.


The Centralization Process

In the original, decentralized Internet, most communication was directly between computers. When your computer accesses a website, it’s accessing a set of files on some other computer. Where that other computer is physically located isn’t all that important to the way the network works—so long as there is some physical path between the two computers, content can be transferred.

As the Internet grew, companies began to specialize in hosting (or creating) content, carrying (or transiting) data, or providing access to the Internet (edge providers), as shown in the figure below.

Transit providers specialize in carrying traffic between cities, nations, and continents using specialized long-haul optical fiber cables. These cables run just about everywhere. Most of the Internet traffic between continents (and sometimes between different places on the same continent) runs through undersea optical links.

However, in the last ten years or so, content providers—particularly social media and cloud computing companies—have moved into the Internet core. The result is an infrastructure that looks closer to the illustration below.

Most of the Internet’s traffic now flows through the networks of a few large companies, like Google, Microsoft, Amazon, and Facebook, rather than through transit providers. The Internet’s physical infrastructure is being reshaped to meet this new reality. Much as landscapers put sidewalks where the grass dies, network operators put physical links and infrastructure where the traffic flows.

The Information Cycle

Why is this centralization happening? It is primarily a result of economies of scale, which have several reinforcing elements: the “virtuous cycle,” impatience, the cost of infrastructure, and complexity all play a role in the increasing centralization of the Internet. Many of these forces are bolstered by specific government and intergovernmental policies.

According to Kai-Fu Lee, a prominent artificial intelligence researcher and author of the book AI Superpowers: China, Silicon Valley, and the New World Order, information about the people using a service is bound into a “virtuous cycle.” Readers of Public Discourse may object to calling this cycle of information, analytics, and consuming ever more attention “virtuous,” since it does not promote virtue in the classical sense. From the perspective of the AI developer and data scientist, however, the cycle can be considered “virtuous” because each turn gathers more information, allowing more precise and predictive models to be built.

The more information you have about people, the more you can feed into your machine-learning process to build detailed profiles of your users. Understanding your users means you can predict what they will like, what they will emotionally engage with, and what will make them act. The more you can engage users, the longer they will use your service, enabling you to gather still more information about them. And knowing what makes your users act allows you to convert views into purchases, increasing your economic power as a provider.

The virtuous cycle is related to the network effect. A network’s value grows much faster than its size: by Metcalfe’s law, the number of possible connections, a rough proxy for value, is proportional to the square of the number of people connected. As more people connect, the value of the network increases, because the information held within the network increases as well.
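A minimal sketch in Python illustrates the arithmetic, assuming, as Metcalfe’s law does, that the number of possible pairwise connections is a crude stand-in for a network’s value:

    # Metcalfe's law, illustrated: possible pairwise connections grow with
    # the square of the number of users, n * (n - 1) / 2. Treating
    # "connections" as "value" is a simplifying assumption, not a measurement.

    def possible_connections(users: int) -> int:
        """Number of distinct pairs among `users` participants."""
        return users * (users - 1) // 2

    for n in (10, 100, 1_000, 10_000):
        print(f"{n:>6} users -> {possible_connections(n):>12,} possible connections")

Ten users yield 45 possible connections; ten thousand yield nearly fifty million. Doubling the users roughly quadruples the possible connections.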

Who will extract the value of those data? Those located at the center of the network can gather the most information as the network grows, so they are best placed to exploit the virtuous cycle. In other words, the virtuous cycle and the network effect favor a small number of large, complex services; together, they drive centralization.

Impatience

Among the most enduring traits of humans is impatience. Absent some countervailing force, such as the guidance of a community or an internal moral code, humans will almost always choose the path of least resistance to any given goal. According to studies and anecdotal evidence, human impatience is growing worse each year. Some even claim that goldfish now have longer attention spans than the average person.

How do content providers, such as social media services, increase user engagement when impatience increases and attention spans decrease? One way is to make their service faster. While there are many ways to make a service faster, two are of particular interest here.

First, move content closer to the user. You cannot move information across a network instantaneously. It takes time to encode information from memory onto wires (or optical fiber) and to decode it at the other end, and this encoding and decoding must happen multiple times as data cross the network. Once encoded, data must be carried from one location to another, a process limited by the speed at which signals propagate through the physical medium. Storing content closer to the user shortens the trip between server and user, speeding up the service.
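A rough sketch in Python shows why distance matters. It uses the common approximation that signals move through optical fiber at about two-thirds the speed of light, roughly 200 kilometers per millisecond; the distances below are illustrative rather than measured:

    # Back-of-the-envelope propagation delay. Assumption: signals travel
    # through optical fiber at ~200 km per millisecond (about two-thirds
    # the speed of light in a vacuum).

    FIBER_SPEED_KM_PER_MS = 200

    def round_trip_ms(distance_km: float) -> float:
        """Round-trip time over a fiber path of the given one-way length."""
        return 2 * distance_km / FIBER_SPEED_KM_PER_MS

    # A server across an ocean versus a copy of the content in a nearby city:
    for label, km in [("cross-ocean server", 7_000), ("nearby edge cache", 100)]:
        print(f"{label}: ~{round_trip_ms(km):.1f} ms per round trip")

Seventy milliseconds per round trip sounds small, but loading a single page can take dozens of round trips, so moving a copy of the content from an ocean away to a nearby city adds up to a speedup users can feel.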

Second, optimize the network path. Content providers shorten and improve the path between the service’s data centers and its users. For instance, a content provider can bypass the core of the Internet (the transit providers and the transit ecosystem) by building its own links between its data centers and the points where users connect to the Internet.
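To see why a shorter, self-operated path helps, consider a toy model in which total delay is propagation time plus a fixed cost at each router hop. Every number below is invented for illustration; real paths vary widely:

    # Hypothetical illustration: total delay modeled as propagation time plus
    # a fixed per-hop processing/queueing cost. The hop counts and per-hop
    # cost are invented for illustration, not measurements of real networks.

    PER_HOP_MS = 0.5  # assumed cost per router hop

    def path_delay_ms(propagation_ms: float, hops: int) -> float:
        """Total one-way delay for a path with the given hop count."""
        return propagation_ms + hops * PER_HOP_MS

    # Same physical distance (70 ms of propagation), different paths:
    print(f"via transit networks: ~{path_delay_ms(70, hops=18):.1f} ms")
    print(f"via private backbone: ~{path_delay_ms(70, hops=6):.1f} ms")

Fewer intermediate networks also mean fewer points of congestion and more control over routing, which is why the largest providers build private backbones all the way to the network edge.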

Moving content to the edge and optimizing the network path require substantial resources and expertise. Like most other things, the devices, physical cabling, buildings, and talent needed to build large computer networks are less expensive at scale. Larger companies are more likely than smaller ones to reach the scale necessary to reduce operational costs, which again favors the centralization of services.

Complexity

We often complain about the complexity of technology in our everyday lives. Even the simplest things seem overly complex. Sometimes engineers do like to make things more complex than they need to be. On the other hand, complexity is generally a result of trying to solve challenging problems.

The problem of moving a massive amount of information of all kinds—from video to simple text files—between hundreds of millions of devices is difficult. Because this problem is so hard, the solutions are necessarily going to be complex.

Over time, as the Internet has grown, as new regulations and ways of doing business have accumulated, and as new applications have been layered “over the top,” the complexity of Internet systems and protocols has increased. As in any other complex ecosystem, specialization has set in; almost no one knows how “the whole thing works” any longer.

How does this drive centralization?

Each new feature, and indeed each change, increases complexity. The more complex a protocol is, the more “care and feeding” it requires. As a matter of course, larger organizations are better able to hire, train, and keep the specialized engineering talent required to build and maintain such complex systems. As the complexity of a subsystem increases, smaller organizations move from building their own to buying the service from a provider. To achieve economies of scale, providers of these services merge, and larger providers buy smaller ones (or drive them out of business).

Larger organizations with specialized engineers find more use cases, which expands a protocol’s feature set and therefore its complexity, which in turn pushes still more services toward large organizations with specialized engineers. Thus centralization drives further centralization.

Why We Should Care

One of the controversies surrounding the big technology companies, such as Facebook, Microsoft, and Google, concerns the cultural power they have gained from their central position in distributing information. The first reason we should care is that this increasing centralization is not just about the distribution of information: data centralization is being mirrored in the Internet’s physical infrastructure.

Centralization is also reflected in the creation and continuing management of Internet protocols. Protocols are designed and standardized in communities—communities over which larger (and wealthier) organizations naturally have more control. Organizations will always bend standards to their advantage. Increased complexity and other protocol modifications can give those same larger organizations an advantage because they have better access to top-flight expertise. A recent effort on the part of Huawei and other Chinese organizations to build a second “center layer” of the Internet in the form of a “new Internet Protocol” illustrates this problem. The idea behind this “new IP” is to provide support for an increasingly large number of “things” connected to the Internet, such as industrial machinery, doorbells, refrigerators, and millions of sensors. Deploying this “new IP” would bifurcate the Internet into two “Internets.”

Increasing complexity and the centralization of the physical infrastructure also create high barriers to entry. If you want to offer a new service, you must either invest enough to build your own infrastructure or rely on existing content providers to host your content and reach your users. When a handful of providers host most of the content on the Internet, those few companies can shut down entire services (as happened with Parler) or control speech.

What Can We Do?

No single government policy, and no single organization, can definitively reverse (or even slow) the centralization of the Internet in the short term. On the other hand, any action that helps break the virtuous cycle, or that merely slows the flow of information through the cycle of surveillance and behavioral surplus, can help, whoever takes it. Individual actions can make a difference, but they must be taken en masse.

Governments and businesses at all levels can, and should, encourage the use of a wider diversity of services connected to the Internet. It is easy to rely on a small handful of large content providers; they have the resources and connectivity to “do the job right.” But every service that takes the easy path deepens the centralization described above.

There is no one immediately apparent solution. Still, understanding the problem is a good place to start.