Internet Engineering Task Force (IETF): Centralization, Decentralization, and Internet Standards (Internet-Draft)

Summary

Although the Internet was designed and operates as a decentralized network of networks, it is constantly subject to forces that encourage centralization.

This article provides a definition of centralization, explains why centralization is undesirable, identifies different types of centralization, lists the limitations of common methods of decentralization, and explores what Internet standards work can do to solve this problem.

Status of this memo

This Internet-Draft is submitted in full compliance with BCP 78 and BCP 79.

An Internet-Draft is a working document of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. A list of current Internet-Drafts is at
https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress”.

This Internet-Draft expires on January 10, 2023.

Copyright Notice

Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.

1. Introduction

Much of the success of the Internet is due to its purposeful avoidance of any single controlling entity. This stance stems from the desire to prevent a single technical failure from having wide-ranging impact [BARAN], and it has also enabled the Internet’s rapid adoption and wide spread. Because joining the Internet, deploying applications on it, or using it does not require permission from, or ceding control to, another entity, the Internet can meet a variety of needs and is now positioned as a global public good.

While avoiding the centralization of the Internet remains a broadly shared goal, achieving it consistently has proven difficult. Today, many of the successful protocols and applications on the Internet operate in a centralized fashion – so much so that some proprietary, centralized services have become so well known that they are often mistaken for the Internet itself. Even if the protocol employs technologies designed to prevent centralization, economic and social factors can push users to prefer centralized solutions built with so-called decentralized technologies.

These difficulties raise questions about the role of architectural regulation – particularly that carried out by open standards bodies such as the IETF – in preventing, mitigating, and controlling the centralization of the Internet. This article discusses various aspects of centralization related to internet standards work and argues that while the IETF may not be able to prevent centralization, we can still take meaningful steps to counteract it.

Section 2 defines centralization, explains why centralization is undesirable, and investigates some types of centralization on the Internet. Section 3 explores decentralization and highlights some related techniques, as well as their limitations. Finally, Section 4 considers the role that Internet standards play in avoiding centralization and mitigating its impact.

The primary readers of this document are the engineers who design and specify Internet protocols. However, designers of proprietary protocols can also benefit from considering these aspects of centralization, especially if they intend their protocols to be considered for eventual standardization. Likewise, policy makers can use this document to help identify and remedy undesirable centralization in protocols and applications.

2. Centralization

This paper defines “centralization” as the ability of an entity or small group of entities to exclusively observe, capture, control, or extract rents from the operation or use of Internet functions.

Here, an “entity” can be a person, a company, or a government. It does not include organizations that operate in a way that effectively mitigates centralization (for example, see Section 3.1.2).

“Internet functions” is defined broadly here. It may be an enabling protocol already defined by a standard, such as IP [RFC791], BGP [RFC4271], TCP [RFC793], or HTTP [HTTP]. It may also be a proposal for a new enabling protocol, or an extension to an existing one.

However, the functionality of the Internet is not limited to the protocols defined by the standards. User-visible applications built on standard protocols are also vulnerable to centralization, such as social networking, file sharing, financial services, and news dissemination. Likewise, network equipment, hardware, operating systems, and software are enabling technologies that can exhibit centralization. The provision of Internet connectivity to end users in a particular region or situation is also affected by centralization, as is the supply of transport between networks (so-called “tier 1” networks).

Centralization is not a binary condition; it is a continuum. At one extreme, a function absolutely controlled by a single entity (see Section 2.2.1) is completely centralized; at the other, a function whose value can be realized by any two parties without any possibility of external intervention or influence is completely decentralized (sometimes described as “distributed” or “peer-to-peer”).

While a few functions sit at one end or the other of this spectrum, most lie somewhere between the two extremes. It is therefore often useful to consider the degree of centralization risk associated with a function, depending on the size, scope, and nature of the effect on it. Note that a function may have more than one source of centralization risk, each with its own characteristics.

Centralization risk is strongest when it affects the entire Internet, but it can also be present when a significant portion of the Internet’s users have no choice of provider for a function. For example, if there is only one provider of a function in a region or legal jurisdiction, that function is effectively centralized for those users.

The risk of centralization is most obvious when a role is directly assigned to a single entity, but it can also arise when an entity assumes that role for other reasons. For example, friction against switching to an alternative provider of a function often leads to centralization (see Section 2.2.3). Centralization risk is indicated if switching requires significant time, resources, expertise, coordination, loss of functionality, or effort. Conversely, a function based on a well-defined, open specification and designed to minimize switching costs may be considered to have less centralization risk, even if only a few large providers offer it.

This definition of centralization focuses primarily on the relationships between communicating parties, rather than on system design. For example, a cloud service might use decentralized techniques to improve its resilience, yet still be operated by a single entity, thereby exhibiting the kind of centralization this article focuses on. Failures caused by cut cables, power outages, or server crashes are qualitatively different from the problems encountered when a core Internet function has a gatekeeper.

Therefore, availability is a concept distinct from centralization, and no relationship between them can be assumed without careful analysis of where and how centralization occurs. A centralized system may be more available because of factors such as the resources at its disposal, but the impact when it fails is correspondingly larger; a decentralized system may be more resilient to partial failures, but may respond more poorly to systemic problems.

For example, a large number of websites may rely on a single cloud hosting provider or content delivery network; if it becomes unavailable (for technical or other reasons), many people’s experience of the Internet may be disrupted. Likewise, a single mobile Internet access provider could fail, affecting hundreds, thousands, or more of its users. In both cases, centralization is not determined by the loss of availability or its scale; rather, centralization is likely present if the parties relying on the function have no reasonable option to switch when they are dissatisfied with the availability of the service provided, or if the friction involved in switching is too great.

It is also important to distinguish centralization from anticompetitive concerns (also known as “antitrust”). While there is considerable interaction between these concepts, and making the Internet more competitive may be one motivation for avoiding centralization, only the courts have the authority to define a relevant market and find conduct anticompetitive. Furthermore, consolidation that the technical community considers undesirable may not attract competition regulation, and conversely, conduct that does attract competition regulation may not concern the technical community if other mitigations are deemed adequate.

2.1. Why centralization is undesirable

There are three main reasons why centralization of Internet functions is undesirable.

First, the nature of the Internet is incompatible with centralization. As a “large, heterogeneous collection of interconnected systems” [BCP95], the Internet is often described as a “network of networks”. These networks relate to one another as peers who agree to facilitate communication, rather than in a relationship of subservience to or coercion by others. This focus on independence of action carries through the way networks are architected, for example, in the concept of “autonomous systems”.

Second, when third parties are unavoidably exposed to communications, the informational and positional advantages they gain allow them to observe behavior (the “panopticon effect”) and to shape or even deny behavior (the “choke point effect”) [JUDGE] – capabilities that those parties (or the states that have power over them) can use for coercive purposes [FARRELL], or even to disrupt society itself. Just as good governance of a state requires that power be decentralized [MADISON], good governance of the Internet requires that power not be concentrated in one place without appropriate checks and balances.

Finally, the centralization of a function can have harmful effects on the Internet itself, including:

  • Limiting innovation. Centralization may preclude “permissionless innovation” – the ability to deploy new, unforeseen applications without requiring coordination with parties other than those you are communicating with.
  • Limiting competition. The Internet and its users benefit from robust competition when many providers offer applications and services, especially when those users can build their own applications and services based on interoperable standards. When a centralized service or platform must be used because there are no suitable alternatives, it effectively becomes infrastructure, which encourages abuse of power.
  • Reducing availability. The availability of the Internet (and of the applications and services built on it) increases when there are many ways to obtain access. While the availability of a centralized service can benefit from the focused attention it receives, the failure of a large centralized provider can have a disproportionate impact on availability.
  • Creating a monoculture. The scale of a centralized service or application can magnify small flaws in a function to the point where they have wide-ranging impact. For example, a single router codebase increases the impact of a bug or vulnerability; a single content recommendation algorithm can have serious social consequences. Viewed systemically, diversity in the implementation of a function leads to more robust outcomes [ALIGIA].
  • Self-reinforcement. As widely noted (see, for example, [VESTAGER]), the access to data that a centralized service enjoys gives it the opportunity to improve its offering while denying that access to others.

See also [KENDE] for a further discussion of how centralization affects the Internet.

As discussed in Section 2.2.2 below, not all centralization is undesirable or avoidable. [SCHNEIDER] points out that “centralized structures can have advantages, such as enabling the public to focus its limited attention on oversight, or forming a power bloc capable of challenging less-accountable blocs that might emerge. Centralized structures that have earned widespread respect in recent centuries – including governments, corporations, and nonprofits – have done so in large part because of the intentional design of those structures.”

So, centralization risk on the Internet is most concerning when the centralization is not widely considered necessary, when it has no checks, balances, or other accountability mechanisms, when it selects favorites that are difficult (or impossible) to displace, and when it has, or threatens to have, the harmful effects described above.

2.2. Types of Centralization

Centralization on the Internet is not uniform; it presents itself in different ways, depending on its relationship to the function in question and its underlying causes. The following subsections describe different kinds of Internet centralization.

2.2.1. Proprietary Centralization

Creating a protocol or application with a fixed role for a specific party is the most straightforward form of centralization. Currently, many messaging, videoconferencing, chatting, social networking and similar applications operate in this way.

Because they allow control by a single entity, proprietary protocols are often seen as simpler to design, easier to evolve, and better able to meet user needs than decentralized alternatives [MOXIE]. However, they carry a corresponding centralization risk – if there are no alternative providers of the function, or switching to one is too difficult, its users are “locked in”.

Proprietary protocols and applications are not considered part of the Internet itself; they are more appropriately described as being built on top of the Internet. They are not regulated by the Internet architecture and related standards, beyond the constraints imposed by the underlying protocols such as TCP, IP, and HTTP.

2.2.2. Beneficial centralization

The goals of some protocols and applications require the introduction of centralized functionality. In doing so, they explicitly rely on centralization to provide a specific benefit.

For example, functions that require a single, globally coordinated “source of truth” are inherently centralized, such as the Domain Name System (DNS), which allows human-friendly names to be translated into network addresses in a globally consistent way.
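As a minimal illustration of that function, the sketch below uses Python’s standard resolver interface to translate a name into addresses; the reserved example domain stands in for a real host:

   import socket

   # Translate a human-friendly name into network addresses using the
   # system's standard resolver; the reserved example domain is used
   # here as a placeholder.
   for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 443):
       print(family.name, sockaddr[0])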

Another function that exhibits beneficial centralization is IP address allocation. Internet routing requires addresses to be uniquely assigned, but if a single government or company controlled addressing, the entire Internet would be at risk of abuse by that entity. Likewise, the need for coordination in the Web’s trust model brings centralization risk, because of the role certificate authorities play in communication between clients and servers.

Protocols that need to solve the “rendezvous problem” – coordinating communication between two parties that are not directly connected – also exhibit this kind of centralization. For example, chat protocols need to coordinate communication between two parties who wish to talk; while the actual communication can take place directly between them (as long as the protocol facilitates it), discovering each other’s endpoints usually requires the participation of a third party at some point. From the perspective of those two users, the rendezvous function carries centralization risk.

Likewise, when a function requires governance to realize common goals and protect minority interests, the chosen governance mechanism naturally creates a “choke point” that carries centralization risk. For example, defining and enforcing content control policies carries such risk.

Deciding what is beneficial is a judgment call. Some protocols cannot function without a centralized function; others may be significantly enhanced for certain use cases, or simply made more efficient, if a function is centralized. This judgment should be made against established architectural principles and with the benefit to end users in mind.

When beneficial centralization is present, Internet protocols typically attempt to mitigate the associated risks using measures such as federation (see Section 3.1.1) and multi-stakeholder governance (see Section 3.1.2). Protocols that successfully mitigate beneficial centralization are often reused, to avoid the considerable cost and risk of re-implementing those mitigations. For example, if a protocol requires a coordinated, global naming function, it is usually better to reuse the Domain Name System than to build a new system.

2.2.3. Concentrated Centralization

Even if a function avoids proprietary centralization and mitigates any beneficial centralization present, it may become centralized in practice when external factors influence its deployment, leaving few, or even only one, entity providing the function. This is often referred to as “concentration”. While the protocol itself imposes no such requirement, the economic, legal, and social factors that encourage the use of a central provider can lead to this kind of centralization.

Often, the factors driving concentration relate to the network effects commonly seen on the Internet. While in theory every node on the Internet is equal, in practice some nodes are far more connected than others: for example, just a few websites drive most traffic on the Web. Although this is seen in many kinds of networks, network effects grant asymmetric power to the nodes that act as intermediaries for communication [BARABASI].

For example, social networking is an application currently provided by a few proprietary platforms, despite standardization efforts (see, for example, [ACTIVITYSTREAMS]), because of the strong network effects involved. While there is some competition among social networks, a group of people wishing to communicate is often effectively locked in by the choices of their peers, because of the coordination required to move to a new service.

See [ISOC] for an in-depth discussion of this type of centralization.

Concentrated centralization is hard to avoid through protocol design alone, and federated protocols are particularly susceptible to it (see Section 3.1.1).

2.2.4. Inherited Centralization

Most Internet protocols and applications depend on other, “lower-level” protocols and their implementations. The characteristics, deployment, and operation of these dependencies can surface centralization risk in the functions and applications built “on top” of them.

For example, the networks between endpoints can introduce centralization risk for application-layer protocols, because they have power over communication that is necessary for those protocols to operate. A network may, for financial, political, operational, or criminal reasons, block access to, slow down, or alter the content of various application protocols or specific services, creating pressure to use other services and thereby contributing to their centralization.

Likewise, having a single implementation of a protocol is an inherited centralization risk, because applications that use it are vulnerable to the control it has over their operation. Even if it is open source, inherited centralization may be present if there are factors that make forking difficult (for example, the cost of maintaining a fork).

Inherited centralization can occur when network effects limit choice, but it can also be caused by legal mandates and incentives that limit the range of choices available for a function (such as Internet access), or its availability.

Some types of inherited centralization can be prevented by enforcing layer boundaries with techniques such as encryption. When the number of parties with access to the content of a communication is limited, parties at lower layers are prevented from interfering with and observing it. While those lower-layer parties may still block communication, encryption makes it harder to single out the target from other traffic.

Note that the deterrent effect of encryption on inherited centralization is most pronounced when most, if not all, communication is encrypted. See also [RFC7258].
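As a rough sketch of how encryption enforces such a layer boundary, the following Python fragment (standard library only; the host name is the reserved example domain) wraps a TCP connection in TLS, so that networks on the path can carry the traffic but cannot observe or modify the application-layer content:

   import socket
   import ssl

   # The networks between the endpoints see only an encrypted byte
   # stream; the application-layer exchange carried inside the TLS
   # session is not visible to, or modifiable by, lower-layer parties.
   context = ssl.create_default_context()
   with socket.create_connection(("example.com", 443)) as raw:
       with context.wrap_socket(raw, server_hostname="example.com") as tls:
           tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
           print(tls.recv(4096).decode(errors="replace"))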

2.2.5. Platform Centralization

The complement to inherited centralization is platform centralization – a function that does not directly define a central role, but can facilitate centralization in the applications it supports.

For example, HTTP is not generally considered a centralized protocol; interoperable servers are easy to set up, and multiple clients exist. It can be used without central coordination, beyond the coordination provided by the DNS discussed above.

However, applications built on HTTP (and other parts of the “web platform”) often exhibit centralization (for example, social networking). As such, HTTP is an example of platform centralization – while the protocol itself is not centralized, it facilitates the creation of centralized services and applications.

Like concentrated centralization, platform centralization is difficult to prevent through protocol design. Because of the layered nature of the Internet, most protocols allow considerable flexibility in how they are used, often in ways that make reliance on a single party’s operation of a function attractive.


3. Decentralization

While the term “decentralization” has a long history of use in economics, politics, religion, and international development, [BARAN] gives one of the earliest definitions relevant to computer networks: a condition in which “complete reliance upon a single point is not always required”.

This seemingly straightforward technical definition hides several problems.

First, it can be difficult to determine which aspects of a function need to be decentralized, and how, because there are often many ways a function can be centralized, and centralization sometimes only becomes apparent after the function is deployed at scale.

For example, a cloud storage function can be implemented using a distributed consensus protocol, ensuring that the failure of any one node does not affect the operation or availability of the system. In that sense, it is decentralized. However, if it is run by a single legal entity, it presents a very different kind of centralization risk, especially if there are few other options, or there is friction against choosing them.

Another example is the Internet itself, which in its early days was envisioned, and widely regarded, as a decentralizing force. The platform centralization inherent in large sites only became apparent once they successfully exploited network effects to dominate social networking, marketplaces, and similar functions.

Second, different people may disagree in good faith about what “sufficient decentralization” means, based on their beliefs, perceptions, and goals. Like centralization, decentralization is a continuum, and not everyone agrees on what the “right” level or type is, how to weigh different forms of centralization against each other, or how to weigh centralization against other architectural goals (such as security or privacy).

This can be seen in the DNS, a single, global “source of truth” with inherent (if beneficial) centralization. The associated risks are mitigated by ICANN’s multi-stakeholder governance (see Section 3.1.2). While many argue that this arrangement is adequate, and may even have desirable qualities (such as the ability to impose community standards on the operation of namespaces), others argue that ICANN’s oversight of the DNS is illegitimate, favoring decentralization based on distributed consensus protocols rather than multi-stakeholderism [MUSIANI].

Third, decentralization inevitably involves adjustments to the power relationships between protocol participants, especially when decentralizing a function opens up the possibility of centralization elsewhere. As Schneider points out in [SCHNEIDER], decentralization “appears to operate as a rhetorical strategy that directs attention to some aspects of the proposed social order and not others”, so “we cannot accept technology as an alternative to serious consideration of society, culture and politics”. Or, as [BODO] puts it more bluntly, “without governance mechanisms, nodes can collude, people can lie to each other, markets can be manipulated, and people can enter and exit the market at significant costs”.

For example, while blockchain-based cryptocurrencies may technically address the centralization inherent in traditional currencies, the concentration of power many of them exhibit in voting/mining power, distribution of funds, and diversity of codebases leads some to question how decentralized they actually are [AREWEDECENTRALIZEDYET]. The lack of formal structures also creates opportunities for informal power structures to emerge, which carry their own risks, including centralization [FREEMAN].

In practice, this means that decentralization requires considerable work, is inherently political, and carries a high degree of uncertainty about the outcome. In particular, if one views decentralization as a broader societal goal (in the spirit of how the term is used in other, non-computing contexts), merely rearranging technical functions is likely to lead to disappointment. “A distributed network does not automatically produce an equal, fair or just social, economic, or political landscape” [BODO].

3.1. Decentralization Techniques

In the context of Internet standards, decentralization is a two-step process: assessing the nature of the centralization risk, and then applying techniques to reduce or mitigate it. The following subsections examine some of these techniques.

Choosing an appropriate decentralization technique requires balancing the specific goals of the function against the centralization risks, because it is rarely possible to exclude all forms of centralization by technical means alone. Done well, decentralization may still produce an outcome with some centralization risk, but that risk should be understood, accepted, and mitigated where possible and appropriate.

It is worth noting that decentralization does not require that the provided functionality needs to be distributed in a particular way or to a particular degree. For example, the Domain Name System [RFC1035] is widely considered to have an acceptable risk of centralization, even though it is provided by a limited set of entities.

3.1.1. Federation

A well-known technique for managing centralization in Internet protocols is federation – designing them so that new instances of any centralized function are easy to create and can maintain interoperability and connectivity with other instances.

For example, SMTP [RFC5321], the basis of the email protocol suite, has two functions that have centralization risks:

  1. giving each user a globally unique address, and
  2. routing messages to users, even when they change network locations or disconnect for long periods of time.

Email reuses the DNS to help mitigate the first risk. To mitigate the second, it defines a specific role for routing users’ messages, the Message Transfer Agent (MTA). By allowing anyone to deploy an MTA and defining rules for interconnecting them, the protocol avoids requiring a single central router.
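A minimal sketch of that DNS reuse, assuming the third-party dnspython package and a placeholder address, shows how any sender can discover the MTAs responsible for a recipient’s domain without consulting a central router:

   import dns.resolver  # third-party package: dnspython

   def mail_routes(address):
       """Return (preference, mail exchanger) pairs for an address's domain."""
       domain = address.rsplit("@", 1)[1]
       answers = dns.resolver.resolve(domain, "MX")
       # Sending MTAs try lower preference values first.
       return sorted((r.preference, str(r.exchange)) for r in answers)

   try:
       print(mail_routes("user@example.org"))  # placeholder address
   except dns.resolver.NoAnswer:
       print("domain publishes no MX records")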

Users can (and often do) choose to delegate this role to someone else, or run their own MTA. However, running your own mail server has become difficult, because a small MTA is likely to be classified as a spam source. Because large MTA operators are widely known and have more at stake if their operations are disrupted, they are much less likely to be classified that way, concentrating operation of the protocol into fewer hands (see Section 2.2.3).

Another example of a federated Internet protocol is XMPP [RFC6120], which supports “Instant Messaging” and similar functions. Like email, it reuses DNS for naming and requires federation to facilitate the rendezvous of users from different systems.

While some deployments of XMPP do support truly federated messaging (i.e., a person using service A can interoperably chat with someone using service B), many of the largest deployments do not. Because federation is voluntary, some operators lock their users into a single service, denying them the benefits of global interoperability.

The examples above illustrate that while federation can be a useful technique for avoiding proprietary centralization and managing beneficial centralization, it does not prevent concentrated or platform centralization. If a single entity can capture the value provided by a protocol, it may use the protocol as a platform to achieve a “winner takes all” outcome – a significant risk with many Internet protocols, since network effects tend to favor such outcomes. Likewise, external factors, such as spam control, may naturally tilt the playing field toward a small number of operators.

3.1.2. Multi-stakeholder governance

Protocol designers can mitigate the risks associated with a beneficially centralized function by delegating its governance to a multi-stakeholder body (see Section 2.2.2) – a body that includes representatives of the different types of parties affected by the system’s operation (“stakeholders”), in an attempt to make decisions that are reasonable, legitimate, and authoritative.

The most widely studied example of this technique is the governance of the DNS, which, as a “single source of truth”, exhibits beneficial centralization in its naming function as well as in the operation of the overall system. To reduce operational centralization, multiple root servers are run by independent operators, who are themselves diverse in geography and in the type of organization (corporate entities, non-profit organizations, and government agencies) across many jurisdictions. The namespace itself is governed by the Internet Corporation for Assigned Names and Numbers (ICANN), a global multi-stakeholder body with representatives from end users, governments, operators, and others.

Another example is the governance of the Web’s trust model, implemented by Web browsers as relying parties and certificate authorities as trust anchors. To ensure that all parties meet the operational and security requirements needed to provide the desired properties, the CA/Browser Forum was established as an oversight body involving stakeholders from both sides.

A further multi-stakeholder example is the standardization of Internet protocols themselves. Because a specification controls the behavior of implementations, the standardization process can be seen as a single point of control. As a result, Internet standards bodies such as the IETF allow open participation and contribution, make decisions in an open and accountable way, follow well-defined processes for making (and, where necessary, appealing) decisions, and take the views of different stakeholder groups into account [RFC8890].

A major downside of this approach is that establishing and sustaining a multi-stakeholder body is not easy. Furthermore, its legitimacy cannot be assumed and may be difficult to establish and maintain (see, for example, [PALLADINO]). This concern is especially pertinent if the function being coordinated is broad, complex, and/or controversial.

3.1.3. Distributed Consensus

Increasingly, distributed consensus technologies such as blockchains are being touted as solutions to centralization. A complete survey of this rapidly changing area is beyond the scope of this article, but we can generalize about its properties.

These techniques attempt to avoid centralization risk by distributing a function among the members of a sometimes large pool of protocol participants. Proper performance of the function is typically guaranteed using cryptographic techniques (often an append-only transaction ledger). Which node handles a given task is usually not predictable or controllable.

Sybil attacks (where one party, or several coordinating parties, cheaply create enough protocol participants to affect how consensus is judged) are a major concern for these protocols. They use indirect techniques to encourage diversity in the pool of participants, such as proof of work (where each participant must demonstrate significant resource consumption) or proof of stake (where each participant has some other incentive to behave correctly).
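For illustration, a toy proof-of-work scheme along these lines might look like the following sketch (SHA-256 with a leading-zero target; the parameters are arbitrary and not drawn from any particular protocol):

   import hashlib

   DIFFICULTY = 4  # number of leading hex zeros required (arbitrary)

   def verify(message: bytes, nonce: int) -> bool:
       """Checking a claimed nonce takes a single hash."""
       digest = hashlib.sha256(message + str(nonce).encode()).hexdigest()
       return digest.startswith("0" * DIFFICULTY)

   def prove(message: bytes) -> int:
       """Search for a nonce; finding one costs work that grows with DIFFICULTY."""
       nonce = 0
       while not verify(message, nonce):
           nonce += 1
       return nonce

   nonce = prove(b"participant announcement")
   print(nonce, verify(b"participant announcement", nonce))

Because producing the nonce is expensive while checking it is cheap, creating large numbers of fake participants becomes costly, which is the property these protocols rely on to discourage Sybil attacks.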

Using these techniques can create barriers to proprietary and inherited centralization. However, concentrated centralization and platform centralization are still possible, depending on the application in question.

In addition, distributed consensus techniques have several potential drawbacks that may make them unsuitable for, or at least difficult to use in, many Internet applications, because their use conflicts with other important goals:

  1. Distributed consensus has significant implications for privacy. Because user activities such as queries or transactions are shared with many unknown parties (and are often publicly visible, given the nature of blockchains), their privacy properties are very different from those of traditional client/server protocols. Potential mitigations (e.g., private information retrieval; see, for example, [OLUMOFIN]) are not yet suitable for broad deployment.
  2. Their complexity and “chattiness” typically result in significantly less efficient use of the network (often by orders of magnitude). When a distributed consensus protocol uses proof of work, energy consumption can become prohibitive (to the point that some jurisdictions have banned it).
  3. Distributed consensus protocols have yet to be proven to scale to the degree expected of successful Internet protocols. In particular, relying on unknown third parties to provide a function can introduce significant variability in latency, availability, and throughput. This is a notable drawback for applications with high expectations of those characteristics (e.g., consumer-facing websites).
  4. By design, distributed consensus protocols diffuse responsibility for a function among several parties that are difficult to identify. While this can be an effective way to prevent some types of centralization, it also means that holding anyone accountable for how the function is performed is difficult and often impossible. While such protocols may use cryptographic techniques to guarantee correct operation, those guarantees may not capture all requirements and may not be applied correctly by protocol designers.
  5. Distributed consensus protocols typically rely on cryptography to establish identity, rather than trusting a third party’s assertion of identity. When a participant loses their keys, the process of recovering their identity exposes additional centralization risk.

It is also important to recognize that a protocol or application can use distributed consensus for some functions and still carry centralization risk elsewhere – either because those functions cannot be decentralized (most commonly, rendezvous and global naming; see Section 2.2.2), or because the designers chose not to decentralize them because of the associated costs and lost opportunities.

Even when distributed consensus is used for all of a service’s technical functions, some coordination is still necessary – whether in the governance of the function itself, in the creation of shared implementations, or in the documentation of shared wire protocols. That coordination represents centralization risk, just at a different layer (inherited or platform).

These potential drawbacks do not rule out the use of distributed consensus techniques in every case, but they do counsel against relying on them uncritically to avoid centralization.

4. What should Internet standards do?

Centralization is driven by powerful forces, both economic and social, as well as by the network effects that come with the scale of the Internet. Because permissionless innovation is a core value of the Internet, and much of the centralization seen on the Internet is carried out by proprietary platforms that take advantage of that nature, the levers available to standards work are limited.

While standards bodies cannot prevent centralization on their own, the following subsections suggest meaningful steps they can take. Standards work can also make valuable contributions to other, related forms of regulation.

4.1. Be realistic

Some centralization risks are readily managed in standards work. For example, if a proprietary protocol were proposed to the IETF, it would be rejected out of hand. There is a growing body of knowledge and experience in managing beneficial centralization risk, with a strong preference for reusing existing infrastructure where possible. And, as noted above, encryption is an often-used way of managing inherited centralization and has become the norm in standard protocols. These responses are appropriate ways for Internet standards to manage centralization risk.

In standards work, however, mitigating concentrated and platform centralization is much more difficult. Because there are no “protocol police”, we cannot require someone to stop building a proprietary service on top of an ostensibly federated protocol. Nor can we prevent someone from building centralized services “on top of” standard protocols without giving up architectural goals such as permissionless innovation. While the imprimatur of an Internet standard is not without value, withholding it alone will not prevent these practices.

Therefore, committing significant resources to scrutinizing protocols for their potential centralization risk – especially concentrated and platform centralization – is unlikely to be an effective way to prevent centralization of the Internet. Almost all existing Internet protocols, including IP, TCP, HTTP, and DNS, exhibit concentrated or platform centralization. Refusing to standardize a newer protocol because it faces similar risks would be neither fair, proportionate, nor effective.

When we identify a centralization risk, we should consider how it relates to other architectural goals when deciding how to address it. In particular, attention should be paid to how effective standards (as a form of architectural regulation) are at achieving each goal.

For example, ex-ante technical constraints tend to be more effective at assuring privacy than ex-post legal remedies. Conversely, as discussed above, certain kinds of centralization may be better addressed through legal regulation. As a first-order concern, then, standards efforts that balance these concerns might focus primarily on privacy. However, these goals are often not completely separable – centralization can result in one or a few entities holding a greater amount and variety of data, available only to them, which raises significant privacy and security concerns.

4.2. Decentralizing proprietary functions

It is worthwhile to create specifications for functions that are currently satisfied only by proprietary providers. By building open specifications on top of established standards, alternatives to centralized functions can be created.

A common objection to such efforts is that their adoption is voluntary, not mandatory; there are no “standards police” to mandate their use or ensure correct implementation. For example, a specification like [ACTIVITYSTREAMS] has been available for some time but has not been widely adopted by social networking providers.

However, while standards are not mandatory, legal regulation is, and regulators around the world are now focusing their efforts on the Internet. In particular, legal mandates for interoperability are increasingly being discussed as a remedy for competition problems (see, for example, [OECD]).

Thus, the appetite for Internet regulation does not only pose a risk to the Internet; it is also an opportunity for new specifications that decentralize these functions, backed by legal mandates in combination with changing norms and the market forces they create [LESSIG].

Successfully creating standards that work in concert with legal requirements is new territory for the IETF, presents many potential pitfalls, and will require new capabilities (particularly liaison, likely originating in the IAB) and considerable effort. If the Internet community does not make that effort, regulators are likely to turn to interoperability specifications from other sources – most likely with less transparency, less investment, limited experience, and no reference to the Internet’s architectural goals.

4.3. Evaluating new decentralization techniques

The decentralization techniques listed in Section 3.1 are not a closed set; broad interest is stimulating the development of new approaches, both general-purpose and tailored to specific problems.

For example, secure multi-party computation techniques (see, for example, [YAO]) can be composed to allow parties to perform computation without revealing their private inputs. Protocols such as [ENPA] and [PRIO] use them to limit the information available to parties in the protocol in order to achieve privacy goals; doing so may also counteract some types of centralization, as discussed in Section 4.5. In other cases, however, these techniques do not automatically preclude all centralization; such systems often still require some trust, even if it is limited, which can lead to other forms of centralization.

Whether using these (or other) techniques will meaningfully resist centralization remains to be seen. Standards bodies (including the IETF) can play an important role by incubating them, applying (and, where necessary, developing) architectural guidelines for privacy, security, operability, and other goals, and assuring interoperability. Where appropriate, publication on the standards track or as an experimental document can signal their suitability to implementers, users, and regulators.

4.4. Building a robust ecosystem

To minimize inherited centralization risk, functions defined by standards should have the explicit goal of broad and diverse implementation and deployment, so that users have as many choices as possible.

[RFC5218], Section 2.1, discusses some of the factors in protocol design that encourage this outcome.

This goal can be furthered by making the cost of switching to a different implementation or deployment as low as possible, to facilitate later substitution. This implies that the standard is functionally complete and specified precisely enough to enable meaningful interoperability.

The goals of completeness and diversity are sometimes in tension. If a standard becomes very complex, it may discourage implementation diversity, because the cost of a complete implementation is too high (consider: Web browsers). On the other hand, if the specification is too simple, it may not provide enough functionality on its own, and the proprietary extensions that result may make switching difficult (see Section 4.6).

The underlying motivation for implementation is also worth considering. While a fully commoditized protocol may not allow implementers to differentiate themselves, it provides opportunities for specialization and improvement elsewhere in the value chain [CHRISTENSEN]. Well-timed standards work can leverage these forces to focus proprietary interest onto open technology, rather than positioning it as a replacement.

Balancing these factors to create a strong ecosystem is difficult, but is often aided by community building and good design – especially with the appropriate use of layering. It also requires ongoing maintenance and evolution of the protocols to ensure they remain relevant and suitable for their use.

4.5. Decentralization of control

Some functions may benefit greatly from being performed by a third party to the communication. Used well, adding a new party to a communication can improve:

  • Efficiency. Many functions on the Internet are more efficient when performed at scale. For example, a content delivery network can offer services at a fraction of the financial and environmental cost that content providers would otherwise pay themselves, because of the scale of its operations. Likewise, a two-sided market platform can introduce considerable efficiencies over pairwise buyer/seller transactions [SPULBER].
  • Simplicity. Fully disintermediated communication shifts the burden of functions onto the endpoints. This can increase the cognitive load on users; compare, for example, commercial social networking platforms with self-hosted efforts.
  • Specialization. Concentrating a function in the hands of a few can improve outcomes because of the specialization it enables. For example, services overseen by professional administrators are widely considered to have a better security posture and better availability.
  • Privacy. For some functions, user privacy can be improved by aggregating their activities so that individual actions cannot be distinguished from one another [CHAUM]. Introducing a third party can also enforce functional boundaries – for example, reducing the need for users to trust potentially malicious endpoints, as seen in so-called “oblivious” protocols (e.g., [RFC9230]) that allow end users to hide their identity from services while still accessing them.

However, introducing a new party to a communication increases concentrated and platform centralization risk for Internet protocols, because it brings opportunities for control and observation. While (as noted above) standards work has limited ability to prevent or control these types of centralization, thoughtfully restricting third-party functions when designing protocols can at least preclude the worst outcomes.

Most commonly, a third party is added to a protocol as an “intermediary” or in a designated “proxy” role. In general, they should be interposed only through the active action of at least one endpoint, and their ability to observe or control the communication should be limited to what is needed to perform their intended function.

For example, early deployments of HTTP allowed networks to insert intermediaries without the endpoints’ knowledge; those intermediaries could see, and by default alter, the entire content of a communication, even when they were only intended to perform basic functions such as caching. The introduction of HTTPS and the CONNECT method (see Section 9.3.6 of [HTTP]), combined with efforts to encourage their adoption, means that such intermediaries must now be explicitly interposed by one endpoint.
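The following sketch, using Python’s standard http.client module with placeholder proxy and origin names, illustrates that pattern: the client itself interposes the intermediary by asking it (via CONNECT) to open a tunnel, and the proxy then carries TLS-protected traffic it cannot read:

   import http.client

   # The endpoint explicitly interposes the intermediary: it connects to
   # the proxy and issues CONNECT to open a tunnel to the origin, then
   # speaks TLS end-to-end through that tunnel. The proxy learns only
   # that a tunnel to www.example.com:443 exists, not the content.
   # "proxy.example.net:3128" is a placeholder, not a real service.
   conn = http.client.HTTPSConnection("proxy.example.net", 3128)
   conn.set_tunnel("www.example.com", 443)
   conn.request("GET", "/")
   print(conn.getresponse().status)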

See [ID.thomson-tmi] for more guidance on protocol mediation.

The term “intermediary” is also used more broadly (often in legal and regulatory contexts) than it is in protocol design; for example, an auction site that intermediates between buyers and sellers is considered an intermediary, even though it is not formally an HTTP intermediary (see Section 3.7 of [HTTP]). Rather than restricting the capabilities of the underlying protocol, protocol designers can address the centralization risks associated with this kind of intermediary by standardizing the function; see Section 4.2.

4.6. Target Extensibility

An important feature of Internet protocols is their ability to evolve, so that they can meet new requirements and adapt to new conditions without requiring a “flag day” to upgrade implementations. Typically, protocols accommodate evolution through extension mechanisms that allow optional functionality to be added over time in an interoperable fashion.

Extensibility can also be seen as a decentralization mechanism: by allowing uncoordinated evolution, it promotes autonomy and the ability to adapt to local needs. However, protocol extensions also increase the risk of platform centralization if a powerful entity can change the target of meaningful interoperability by adding proprietary extensions to a standard protocol. This is especially true when the core standard does not itself provide sufficient utility.

For example, the extreme flexibility of SOAP, and its failure to provide significant standalone value, allowed vendors to require the use of the extensions they preferred, favoring those with more market power.

As such, standards work should focus on providing concrete utility to the majority of its users as published, rather than on being a “framework” where interoperability is not immediately available. Internet protocols should not make every aspect of their operation extensible; extension points should be reasoned about as deliberate boundaries of flexibility and control. When extension points are defined, they should not allow an extension to declare itself mandatory for interoperability, as that pattern invites abuse.

Where extensions are allowed, the extensions that emerge should be monitored; where appropriate, widely adopted extensions should be put through the standards process to ensure that the result conforms to architectural principles and shared goals (see also Section 4.2).

5. Security Considerations

This document has no direct security impact on Internet protocols. However, failure to consider centralization risks can give rise to any number of security issues.
