Trust is complex. As I mentioned briefly in my article on the oracle problem, trust can be broadly divided into three categories:
Economic – setting incentives so that any economically rational actor will take the desired action.
Moral/normative – establishing incentives and social norms so that actors take certain actions because they are moral, or because they conform to desired norms from which the actor would not normally wish to deviate.
Reputation-based – setting incentives that make actors willing to maintain existing “trust” relationships and reputations as part of a larger incentive system.
The motivation behind blockchain designs that rely more on crypto-economic guarantees than on moral or reputation-based guarantees is probably:
Less systemic risk
More certainty about the level of risk
when dealing with a purely crypto-economic world.
Less systemic risk
For example, if a store chooses not to place a security guard at the door, it is assuming that the vast majority of people are ethical enough not to rob the store. Even if it does place a guard, it is assuming that the number of people willing to attempt armed robbery is small. When banks make loans without full collateral, they assume that all loans will not default at the same time. When Aave makes you a loan with a 75% collateral factor, it assumes that the price of the collateral will not fall by more than 25% between oracle updates. When courts cap the number of cases they will hear in a given period, they assume they will not be DDoS-ed by frivolous lawsuits that slow down the judicial process and prevent justice from being done.
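The Aave example can be made concrete with a little arithmetic. The sketch below is illustrative, not Aave's actual liquidation logic; only the 75% collateral factor comes from the text, and the helper names are hypothetical.

```python
# Illustrative sketch of an over-collateralized loan (not Aave's real code).
# A 0.75 collateral factor means you may borrow up to 75% of your
# collateral's value; the protocol only takes a loss if the collateral
# loses more than 25% of its value before a liquidation can happen.

COLLATERAL_FACTOR = 0.75

def max_borrow(collateral_value: float) -> float:
    """Maximum debt allowed against the given collateral value."""
    return collateral_value * COLLATERAL_FACTOR

def is_liquidatable(collateral_value: float, debt: float) -> bool:
    """The protocol can liquidate once debt exceeds the allowed fraction."""
    return debt > collateral_value * COLLATERAL_FACTOR

def has_bad_debt(collateral_value: float, debt: float) -> bool:
    """The protocol loses money only if collateral is worth less than the
    debt, i.e. the price fell more than 25% before liquidation."""
    return collateral_value < debt

collateral = 1000.0            # e.g. $1000 of ETH
debt = max_borrow(collateral)  # borrow the maximum: $750

assert not is_liquidatable(collateral, debt)      # healthy at the start
assert is_liquidatable(collateral * 0.99, debt)   # any drop triggers liquidation
assert not has_bad_debt(collateral * 0.75, debt)  # -25%: exactly break-even
assert has_bad_debt(collateral * 0.74, debt)      # >25% drop: bad debt
```

The gap between the liquidation threshold and the bad-debt point is the safety margin the oracle-update frequency has to cover.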
All systems in the real world bear some degree of systemic risk, meaning that an unlikely event could bring down large systems. Sometimes such events do happen; for example, the 2008 financial crisis was due to a systematic overestimation of the liquidity of mortgage-backed securities. Civil wars, similarly, occur when a small (but critical) group of individuals disrupts existing systems that were not designed with a mechanism to withstand such disruption.
Complete protection against tail risk is impossible: no third party can absorb it, and it can usually only be transferred from one party to another. Eliminating it entirely entails absurd capital inefficiency, at least in a market environment. Even in a non-market environment, refusing to take any systemic risk slows the system down considerably. For example, if the judiciary set a strict cap on the number of cases it could hear and auctioned off the rights to those time slots, all sorts of contracts and trust relationships simply couldn't be established, because there would be no judiciary to arbitrate them (which, incidentally, is exactly what blockchains do by specifying block size limits).
Certainty over risk
Systemic risk is often difficult to quantify. For example, a nationwide earthquake is an uninsurable systemic risk whose likelihood can only be roughly estimated from historical data. So is the chance of an armed robbery at a store, which can only be estimated very imprecisely from historical data. The liquidity of assets in the open market is also not fully measurable. Sure, you can measure current liquidity, but there is no guarantee that this liquidity will be there when you actually need it, nor is there an easy way to estimate the likelihood that it will (though an entire science has evolved around measuring this risk).
Most activity in the real world is conducted through relationships of trust. For example, companies trust that Google will not delete their data in order to make their stock price plummet (analogous to a data-unavailability attack), and customers trust that their doctor or lawyer will not betray them for financial gain.
Part of forming such a relationship is an iterated game in which both parties behave in a trustworthy manner. In the mind of the participant who chooses to trust, each such action updates a Bayesian prior (a probability estimate derived from previous experience and analysis). Over time, the relationship grows closer. Because of the time and effort required to build such relationships, people prefer to maintain them for future situations rather than betray them (even if those future situations are currently unknown).
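The prior-updating described above can be sketched with a simple Beta-Bernoulli model. This is a minimal illustration, not a model from the article: each cooperative or betraying interaction shifts the observer's estimated probability that the counterparty is trustworthy.

```python
# Minimal Beta-Bernoulli sketch of trust as Bayesian prior updating.
# Start from a neutral Beta(1, 1) prior; each interaction in which the
# counterparty cooperates (True) or betrays (False) shifts the posterior.

def update_trust(alpha: float, beta: float, cooperated: bool):
    """One Bayesian update of the Beta(alpha, beta) trust prior."""
    return (alpha + 1, beta) if cooperated else (alpha, beta + 1)

def expected_trust(alpha: float, beta: float) -> float:
    """Posterior mean: estimated probability the counterparty cooperates."""
    return alpha / (alpha + beta)

alpha, beta = 1.0, 1.0        # neutral prior: 50% expected trust
for _ in range(9):            # nine cooperative interactions
    alpha, beta = update_trust(alpha, beta, True)

print(expected_trust(alpha, beta))  # 10/11 ≈ 0.909: trust has grown

# A single betrayal after a long cooperative history dents trust only a
# little, which is why established relationships are slow to destroy but
# also costly to rebuild from scratch elsewhere.
alpha, beta = update_trust(alpha, beta, False)
print(expected_trust(alpha, beta))  # 10/12 ≈ 0.833
```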
Nicky Case’s The Evolution of Trust is a game that illustrates this evolution of trust and priors very well.
Who is the trusted entity?
At a fundamental level, trust is trust between individuals (not a blob of capital, as some blockchain analyses would have you believe). Beyond that, trust is built between systems, each consisting of its own trust relationships. For example, when users trust Google not to make their data unavailable, they trust both the complex set of relationships within Google’s corporate structure and Google’s relationships with other businesses, the judiciary, and broader society that make such attacks difficult. Each of those other business and judicial systems is in turn trustworthy because of the complex trust relationships within it.
The social layer in Ethereum
A system that relies purely on crypto-economic incentives to build trust is inferior to one that allows for trust relationships. Limiting the toolkit people can use to build relationships and work toward common goals limits the rate at which they can reach those goals.
The challenge with systemic risk in these trust relationships arises when that risk permeates the base layer itself. In the process of innovating and enabling new forms of activity not possible on Bitcoin, Ethereum introduces significant systemic risk. Whether this systemic risk is an inevitable consequence of enabling additional kinds of useful activity on the blockchain is an open question. I strongly believe it is at least brought about by the necessity of oracles and the trust relationships they rely on.
The systemic risk of hard forks
A hard fork coordinated through the community is the blockchain's last line of defense. This social layer inevitably carries many systemic risks and inefficiencies. For example, popularity on social media will play an important role in shaping public opinion on which fork to use and which to discard.
Ethereum's core researchers have an important influence on this process – users are likely to simply follow the fork where all subsequent research and innovation is taking place, as the decline of ETC has shown.
Dapps and sources of trust will also be affected. Consider a hypothetical in which Circle declares one of the two branches of Ethereum the canonical chain, allowing USDC redemptions only on that chain and not the other; this would lead to significant volatility and the collapse of the other fork's market. Chainlink could then also declare a canonical fork, at which point the price of LINK on the other fork collapses, some participant acquires all the LINK at a bargain price and reports false values, effectively draining dapps such as Aave and Synthetix. Those dapps would then also eventually abandon that fork. In effect, just a few participants determine the canonical fork for the entire DeFi ecosystem.
As a result, the process may actually be captured by a very small number of entities.
Systemic risk of not running nodes
A large number of Ethereum users do not run nodes. They maintain trust relationships with entities such as Infura (MetaMask uses Infura by default), The Graph (built into dapp backends and usable for constructing transaction payloads), and Etherscan. While Infura does not currently serve incorrect data, the systemic risk still exists.
One might argue that not everyone needs to run a node, as long as the option to run one without any trusted intermediary remains open. However, if everyone establishes the same trust relationships with a small set of entities, the system as a whole loses the ability to democratically resolve a hard fork in an emergency.
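Short of running a full node, one mitigation is to cross-check several independent providers rather than trusting a single one. The sketch below is a hypothetical illustration: the provider names and hashes are made up, and real code would fetch the block hashes over JSON-RPC (e.g. `eth_getBlockByNumber`) before comparing them.

```python
from collections import Counter

def consensus_block_hash(responses: dict) -> str:
    """Given {provider_name: block_hash} for the same block number,
    return the strict-majority hash, or raise if no majority exists.
    This reduces (but does not remove) trust in any single provider."""
    counts = Counter(responses.values())
    best_hash, votes = counts.most_common(1)[0]
    if votes <= len(responses) // 2:
        raise RuntimeError(f"no majority among providers: {responses}")
    return best_hash

# Hypothetical responses for the same block from three providers:
responses = {
    "infura": "0xabc...",
    "alchemy": "0xabc...",
    "etherscan": "0xdef...",   # one provider disagrees
}
print(consensus_block_hash(responses))  # "0xabc..." wins 2-of-3
```

Note that if all the popular providers share infrastructure or incentives, the "majority" here inherits exactly the correlated systemic risk the article describes.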
Systemic risk due to MEV capture
Flashbots has centralized the network in order to capture MEV (miner-extractable value). Much of Ethereum's transaction flow now passes through a handful of servers maintained by the Flashbots team. Miners are forced to identify themselves to some extent and establish connections to these servers, which in turn could become a focal point for coordination between miners (and such miner coordination is bad, because it is a prerequisite for many types of attack).
Miners' incentives have shifted from the original gas-price mechanism to bundle payments. The original DoS protection was crypto-economic (paying gas even for failed transactions), whereas now it is API rate limits on the Flashbots servers. This also benefits users: they would rather not pay gas for failed transactions, and prefer centralized servers that sort and discard transactions without payment to paying miners to reach consensus.
Some entities can commit to not extracting MEV and to fair transaction ordering, especially on layer-2 networks where users can choose the block producer. This could be a central entity, as Arbitrum and Optimism were when they first launched, or a fair-ordering scheme like Chainlink's. Accepting this ordering commitment is a trust relationship that poses a systemic risk if it fails and the system reverts to maximal MEV extraction.
A world built on the assumption that MEV is not extracted may face serious unintended consequences when it degenerates into one where MEV is extracted, and this too introduces systemic risk.
Mempools can grow without bound when transactions pay zero gas fees, and access to these mempools and relays may well become further centralized. The same applies to the computational power required to extract maximal MEV from an unbounded mempool. All this centralization brings systemic and regulatory risk.
Systemic risk due to scalability
Increasing transactions per second inevitably forces centralization, because decentralized systems running on desktop computers cannot move that much data. All rollups have centralized (albeit potentially rotating) block producers with higher bandwidth and computational requirements. Much of Ethereum's traffic will soon flow through them, and we will be trusting them to maintain uptime and not censor users by IP address.
Systemic risk caused by oracles
All oracles – whether MakerDAO's, Chainlink, or UMA – have weaker crypto-economic guarantees than the base layer, yet they attract a considerable amount of activity and capital today. Crypto-economically, they assume an honest majority, since a malicious majority may have an incentive to attack the system. Some level of trust beyond crypto-economics is always involved, as reporting entities disclose their identities and can be taken to court if they act in bad faith. Reliance on the legal system simply translates the systemic risk of unethical actors into the risk of the legal system failing or being captured by regulation.
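The honest-majority assumption shows up concretely in how oracle networks typically aggregate reports: taking the median means the result can only be moved arbitrarily if more than half of the reporters collude. This is an illustrative sketch of that aggregation rule, not the actual MakerDAO, Chainlink, or UMA code, and the price values are hypothetical.

```python
from statistics import median

def aggregate_price(reports: list) -> float:
    """Median aggregation: as long as a strict majority of reporters is
    honest, the result is bounded by honest values, so a malicious
    minority cannot move the reported price arbitrarily."""
    if not reports:
        raise ValueError("no oracle reports")
    return median(reports)

honest = [2000.0, 2001.0, 1999.0]   # hypothetical honest ETH/USD reports
malicious = [0.01, 0.01]            # a minority trying to crash the price

print(aggregate_price(honest + malicious))  # 1999.0: bounded by honest values
```

A malicious *majority*, by contrast, controls the median outright, which is exactly why the residual guarantee falls back to identities and the legal system.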
Systemic risk arising from stake delegation
As described in this article, Ethereum 2.0's form of stake delegation raises significant systemic risk. The vast majority of ETH holders are not technically capable or motivated enough to run their own validators, which leads them to delegate their ETH to well-known entities with varying levels of trust.
This centralization would in turn increase the risk of regulatory capture of the entire Ethereum ecosystem, for example through state requirements to censor or slow down transactions deemed stolen funds or otherwise illegal. Regulatory capture could theoretically go well beyond this point: KYC/AML may eventually be required, first for all stakers and eventually for all users.
Ethereum is exposed to a significant amount of systemic risk, which in my opinion is a natural consequence of the fact that it supports many more types of economic activity than Bitcoin.
We may need to pay more attention to identifying and quantifying such risks. This work can draw on the general body of economic and game-theoretic research, and on attempts to formalize how trust relationships develop, what guarantees they offer, and the forms of risk they entail.
Blockchains attempt to address the risk uncertainty present in trust relationships by assuming that no trust relationships exist, an approach that will soon prove inadequate. Instead, we must now look deeper into how trust relationships work and close the gaps in our understanding of them.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/on-the-six-systemic-risks-of-ether-what-are-the-challenges-to-blockchains-trust-relationships/