One of the most valuable properties of many blockchain applications is trustlessness: the ability of the application to keep operating in the expected way without relying on specific actors to behave in specific ways, even when those actors’ interests change and push them toward different, unexpected behavior in the future.
Blockchain applications are never completely trustless, but some are closer to trustless than others. If we want to make trust minimization a reality, we need the ability to compare different levels of trust.
First, my simple one-sentence definition of trust: trust is the use of any assumptions about the behavior of others.
When we run a piece of code written by someone else, we trust that they wrote it honestly, or at least that enough other people have checked it.
To analyze blockchain protocols, I tend to divide trust into four dimensions:
- How many people do we need to behave as we expect?
- Out of how many?
- What motivations do those people need in order to behave that way? Do they need to be altruistic, or is profit-seeking enough?
- How badly would the system fail if those assumptions were violated?
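The four dimensions above can be captured as a small record. A minimal sketch in Python (the `TrustModel` class and its field names are my own illustrative assumptions, not part of any protocol):

```python
from dataclasses import dataclass

# Hypothetical sketch: one record per trust assumption, covering the
# four dimensions above (how many, out of how many, why, and how bad).
@dataclass
class TrustModel:
    honest_needed: int   # how many actors must behave as expected (k)
    total_actors: int    # out of how many (N)
    motivation: str      # "altruistic" or "rational"
    failure_mode: str    # what happens if the assumption is violated

    def label(self) -> str:
        # Render in the k-of-N notation used throughout this article.
        return f"{self.honest_needed}-of-{self.total_actors}"

centralized = TrustModel(1, 1, "altruistic", "total failure")
blockchain = TrustModel(51, 100, "rational", "51% attack")
print(centralized.label())  # 1-of-1
print(blockchain.label())   # 51-of-100
```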
Now, let’s focus on the first two. We can draw a graph of the possible combinations, where the greener, the better. Let’s explore these categories in more detail:
1-of-1: There is only one actor, and the system works if (and only if) that one actor does what we expect. This is the traditional “centralized” model, and the one we are trying to improve upon.
N-of-N: A “dystopian” world. We rely on a whole group of actors, all of whom need to act as expected for everything to work, with no fallback if any one of them fails.
N/2-of-N: This is how blockchains work – they function correctly if the majority of miners (or PoS validators) are honest. Note that N/2-of-N becomes more valuable as N grows; a blockchain whose network is controlled by a small number of miners/validators is much riskier than one whose miners/validators are widely distributed. That said, if we want to improve on even this level of security, we need to worry about surviving 51% attacks.
1-of-N: There are many actors, and the system works as long as at least one of them behaves as we expect. Any system based on fraud proofs falls into this category, as do trusted setups, although in that case N is often smaller. Note that we want N to be as large as possible!
Few-of-N: There are many actors, and the system works as long as at least some small fixed number of them do what we expect. Data availability checks fall into this category.
0-of-N: The system works as expected and does not rely on any external actors. Validating a block by checking it yourself falls into this category.
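The categories above can be summarized as honest-actor thresholds. A minimal sketch (the function name is mine, and the choice of 3 as the fixed “few” is purely illustrative):

```python
# Hypothetical helper: minimum number of honest actors required under
# each of the trust categories above, for a system with n participants.
def honest_required(model: str, n: int) -> int:
    thresholds = {
        "0-of-N": 0,             # no external actor needed at all
        "1-of-N": 1,             # any single honest actor suffices
        "few-of-N": min(3, n),   # a small fixed number (3 is illustrative)
        "N/2-of-N": n // 2 + 1,  # an honest majority
        "N-of-N": n,             # every single actor must be honest
    }
    return thresholds[model]

for model in ("0-of-N", "1-of-N", "few-of-N", "N/2-of-N", "N-of-N"):
    print(model, honest_required(model, 100))
```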
The above categories are very different from each other.
Believing that a particular person (or organization) will work as expected is very different from believing that someone somewhere is going to do what we expect.
1-of-N is arguably closer to 0-of-N than to N/2-of-N or 1-of-1. A 1-of-N model may feel similar to a 1-of-1 model, since it can seem like we are relying on one specific actor at any given moment, but the two are very different: in a 1-of-N system, if the actor we currently rely on disappears or turns evil, we can switch to another, whereas in a 1-of-1 system we are stuck.
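The difference is easy to see with a little probability arithmetic. Assuming (purely for illustration) that each actor is independently honest with probability p, a 1-of-1 system works with probability p, while a 1-of-N system fails only if every one of the N actors misbehaves:

```python
# Illustrative model: each actor is independently honest with probability p.
def p_works_1_of_1(p: float) -> float:
    # The single actor must behave as expected.
    return p

def p_works_1_of_n(p: float, n: int) -> float:
    # The system fails only if ALL n actors misbehave.
    return 1 - (1 - p) ** n

p = 0.9
print(p_works_1_of_1(p))      # 0.9
print(p_works_1_of_n(p, 10))  # very close to 1: fails only if all 10 misbehave
```

Even with fairly unreliable actors, a 1-of-N system's failure probability shrinks exponentially in N, which is why it sits much closer to 0-of-N than to 1-of-1.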
In particular, note that even the correctness of the software we run typically depends on a few-of-N trust model: we assume that if there are bugs in the code, someone somewhere will find them.
Another important distinction is: how will the system fail if our trust assumptions are violated? In blockchain, the two most common types of failures are liveness failures and security failures.
A liveness failure is when we are temporarily unable to do something we want to do: withdraw coins, get a transaction included in a block, read information from the blockchain.
A security failure is when something the system was meant to prevent actually happens: for example, an invalid block gets included in the blockchain.
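One way to make the distinction concrete is as a toy classifier (the enum, the function, and the event names are all hypothetical, chosen only to mirror the examples above):

```python
from enum import Enum

class Failure(Enum):
    LIVENESS = "temporarily cannot act (e.g. a withdrawal is delayed)"
    SECURITY = "something forbidden happened (e.g. an invalid block accepted)"

# Hypothetical classifier: events that merely delay users are liveness
# failures; events that violate the system's rules are security failures.
def classify(event: str) -> Failure:
    liveness_events = {"withdrawal_delayed", "tx_censored", "chain_stalled"}
    return Failure.LIVENESS if event in liveness_events else Failure.SECURITY

print(classify("withdrawal_delayed").name)      # LIVENESS
print(classify("invalid_block_included").name)  # SECURITY
```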
Below are several examples of trust models for blockchain L2 protocols. I use “small N” to refer to the set of participants of the layer-2 system itself, and “big N” to refer to the participants of the underlying blockchain; the assumption is that the layer-2 protocol’s community is smaller than the blockchain’s.
I also limit the term “liveness failure” to situations where coins are stuck for an extended period of time; no longer being able to use the system, while still being able to withdraw coins near-instantly, does not count as a liveness failure.
- Channels (including state channels and the Lightning Network): 1-of-1 trust model for liveness (our counterparty can temporarily freeze our funds, though the harm can be mitigated by splitting coins among multiple counterparties), and an N/2-of-big-N trust model for security (a 51% attack on the blockchain can steal our coins).
- Plasma (assuming a centralized operator): 1-of-1 trust model for liveness (the operator can temporarily freeze our funds), and an N/2-of-big-N trust model for security (a 51% attack on the blockchain can steal our coins).
- Plasma (assuming a semi-decentralized operator, such as DPOS): N/2-of-small-N trust model for liveness, and an N/2-of-big-N trust model for security.
- Optimistic rollup: 1-of-1 or N/2-of-small-N trust model for liveness (depending on the operator type), and an N/2-of-big-N trust model for security.
- ZK rollup: 1-of-small-N trust model for liveness (if the operator fails to include our transaction, we can still withdraw; and if the operator fails to include our withdrawal immediately, they cannot produce any more batches, and we can withdraw ourselves with the help of any full node of the rollup system); no security failure risk.
- ZK rollup (with a light-withdrawal enhancement): no liveness failure risk, no security failure risk.
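The examples above can be laid out side by side as data, which makes the comparison easier to scan (the dictionary name and keys are my own; the model labels are the ones used in the list):

```python
# Summary of the layer-2 trust models listed above.
# "small N" = participants of the layer-2 system; "big N" = blockchain's.
L2_TRUST = {
    "channels":                      {"liveness": "1-of-1",                   "security": "N/2-of-big-N"},
    "plasma (central operator)":     {"liveness": "1-of-1",                   "security": "N/2-of-big-N"},
    "plasma (DPOS operator)":        {"liveness": "N/2-of-small-N",           "security": "N/2-of-big-N"},
    "optimistic rollup":             {"liveness": "1-of-1 or N/2-of-small-N", "security": "N/2-of-big-N"},
    "zk rollup":                     {"liveness": "1-of-small-N",             "security": "none"},
    "zk rollup (light withdrawals)": {"liveness": "none",                     "security": "none"},
}

for name, model in L2_TRUST.items():
    print(f"{name}: liveness={model['liveness']}, security={model['security']}")
```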
Finally, there is the question of incentives: do the actors we trust need to be very altruistic to behave as expected, only slightly altruistic, or is being rational enough?
Helping others withdraw from a ZK rollup becomes rational if we add a way to micro-pay for the service, so there is really little reason to worry about being unable to exit from a rollup with any significant usage.
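The incentive argument is simple arithmetic: a helper does not need to be altruistic as long as the micro-payment covers their cost. A toy check with made-up numbers (the function and the fee/cost figures are purely illustrative):

```python
# Toy incentive check: helping someone exit a ZK rollup is rational for
# the helper whenever the micro-payment exceeds the helper's own cost
# (proving work, gas, etc.); no altruism is required in that case.
def is_rational_to_help(fee_paid: float, helper_cost: float) -> bool:
    return fee_paid > helper_cost

print(is_rational_to_help(fee_paid=5.0, helper_cost=2.0))  # True: profitable
print(is_rational_to_help(fee_paid=0.0, helper_cost=2.0))  # False: needs altruism
```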
At the same time, the risks to other systems can be mitigated if we as a community agree not to accept 51%-attack chains – for example, chains that revert too far back into history or that censor blocks for too long.
When someone says a system “depends on trust”, ask them in more detail what they mean! Do they mean 1-of-1, 1-of-N, or N/2-of-N? Do they expect the participants to be altruistic or merely rational? If altruistic, at a small cost to themselves or a large one?
And if the assumption is violated – will we just have to wait a few hours or days, or will our assets be stuck forever? Depending on the answers, our own answer to whether we are willing to use the system can be very different.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/vitaliks-thoughts-on-the-trust-model/