V God’s 2019 and 2021

In 2014, I published a post and gave a talk listing a number of hard problems in mathematics, computer science, and economics that I considered important for the field of cryptocurrency (as I called it then) to reach maturity. In the five years since, much has changed. But how much progress has actually been made on the problems we thought were important back then? Where did we succeed, where did we fail, and in what ways have we changed our minds about what matters? In this article, I will go through the 16 problems from 2014 one by one and look at where we stand on each of them today. At the end, I will give my new picks for the hard problems of 2019.

These problems can be divided into three categories:

  1. Cryptographic problems, which, if they are solvable at all, one can expect to solve with purely mathematical techniques.
  2. Consensus theory problems, largely improvements to proof of work and proof of stake.
  3. Economic problems, which have to do with creating structures that give incentives to different participants, and which often involve the application layer more than the protocol layer.

We have seen significant progress in all of these categories, though some areas have advanced more than others.

Cryptographic problems

Blockchain scalability

One of the biggest problems facing the cryptocurrency field today is scalability… The main concern with [very large blockchains] is trust: if only a few entities are capable of running full nodes, then those entities can collude and agree to give themselves a large number of additional bitcoins, and there would be no way for other users to see that a block is invalid without processing the entire block themselves.

Problem: Create a blockchain design that maintains Bitcoin-like security guarantees, but where the maximum size of the most powerful node that needs to exist for the network to keep functioning is substantially sublinear in the number of transactions.

Status: Great theoretical progress, pending more real-world evaluation.

Scalability is one technical problem on which we have made huge theoretical progress. Five years ago, almost no one was thinking about sharding; now, sharding designs are commonplace. Besides Ethereum 2.0, we have OmniLedger, LazyLedger, Zilliqa, and research papers that seem to come out every month. In my view, further progress at this point is incremental. Fundamentally, we already have a number of techniques that allow a group of validators to securely come to consensus on much more data than a single validator can process, as well as techniques that allow clients to indirectly verify the full validity and availability of blocks, even under 51% attack conditions.

These are probably the most important technologies. There have also been other, smaller developments, such as cross-shard communication through receipts, as well as “constant factor” improvements such as BLS signature aggregation.

That said, fully sharded blockchains have not yet been seen in live operation (the partially sharded Zilliqa has recently started running). On the theoretical side, mainly disputes about details remain, along with challenges relating to the stability of sharded networking, developer experience, and mitigating the risks of centralization; basic technical feasibility no longer seems to be in doubt. But the remaining challenges cannot be solved by just thinking about them; only building the system and seeing Ethereum 2.0 or a similar chain running live will suffice.
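
To make the idea of indirectly verifying data availability concrete, here is a minimal sketch of data availability sampling, one way to instantiate the kind of light-client verification mentioned above. It is illustrative only: a real design erasure-codes the block so that roughly half of the chunks suffice to reconstruct it, and the `fetch_chunk` function here stands in for a network request with Merkle proofs.

```python
import random

def sample_availability(block_chunk_count, fetch_chunk, samples=30):
    """Probabilistically check that a block's data was actually published.

    fetch_chunk(i) is assumed to return the i-th chunk (with a Merkle proof,
    omitted here) or None if no node can serve it. If the publisher withheld
    a large fraction of the erasure-coded chunks, at least one of the random
    samples fails with overwhelming probability.
    """
    for _ in range(samples):
        index = random.randrange(block_chunk_count)
        if fetch_chunk(index) is None:
            return False  # a sampled chunk is unavailable: reject the block
    return True  # all samples present: accept with high confidence

# Example: a publisher that withheld half of the chunks is caught
# with probability about 1 - 0.5**30.
published = {i: b"data" for i in range(0, 1024, 2)}
print(sample_availability(1024, published.get))
```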

Timestamping

Problem: Create a distributed incentive-compatible system, whether an overlay on top of a blockchain or its own blockchain, which maintains the current time to high accuracy. All legitimate users have clocks that are normally distributed around some “real” time with a standard deviation of 20 seconds. No two nodes are more than 20 seconds apart.

The solution is allowed to rely on an existing notion of “N nodes”; in practice this would be enforced with proof of stake or non-Sybil tokens. The system should continuously provide a time that is within 120 seconds (or less, if possible) of the internal clocks of >99% of honestly participating nodes. External systems may end up relying on this system; hence, it should remain secure against attackers controlling less than 25% of nodes, regardless of incentives.

Status: Some progress has been made.

Ethereum has actually survived just fine with a 13-second block time and no particularly advanced timestamping technology; it uses a simple technique in which a client does not accept a block whose stated timestamp is ahead of the client’s own local clock. That said, this has not been tested under serious attacks. The recent network-adjusted timestamps proposal tries to improve on the status quo by letting the client determine a consensus time in cases where it does not locally know the current time to high accuracy; this has not yet been tested.
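
As a rough illustration of this kind of client-side rule (not Ethereum’s exact consensus rule), a sketch might look like the following; the 15-second tolerance and the parent-timestamp check are assumptions for the example.

```python
import time

MAX_FUTURE_DRIFT = 15  # seconds; illustrative tolerance, not a real protocol constant

def timestamp_acceptable(block_timestamp, parent_timestamp, local_time=None):
    """Reject blocks dated in the future relative to the local clock,
    and require timestamps to move forward relative to the parent block."""
    if local_time is None:
        local_time = time.time()
    if block_timestamp > local_time + MAX_FUTURE_DRIFT:
        return False  # claimed time is ahead of our clock: do not accept yet
    if block_timestamp <= parent_timestamp:
        return False  # time must strictly increase along the chain
    return True

# Example: a block claiming a timestamp one minute in the future is rejected.
now = time.time()
print(timestamp_acceptable(now + 60, now - 13))  # False
print(timestamp_acceptable(now, now - 13))       # True
```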

But in general, timestamping is not currently at the forefront of perceived research challenges; perhaps this will change once more proof-of-stake chains (including Ethereum 2.0, but also others) come online as real, live systems and we see what the issues are.

Arbitrary proof of computation

Problem: Create programs POC_PROVE(P,I) -> (O,Q) and POC_VERIFY(P,O,Q) -> {0, 1} such that POC_PROVE runs program P on input I and returns the program output O together with a proof of computation Q, and POC_VERIFY takes P, O, and Q and outputs whether or not Q and O were legitimately produced by the POC_PROVE algorithm using P.
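
To pin down the interface, here is a toy sketch in Python. It is emphatically not a SNARK: the “proof” here is just a commitment that the verifier checks by re-running the program (and it therefore also needs the input I, unlike the real interface), so it has none of the succinctness that makes real proof systems interesting. It only illustrates what POC_PROVE and POC_VERIFY are supposed to take and return.

```python
import hashlib, pickle

def _commit(program_src, inputs, output):
    data = pickle.dumps((program_src, inputs, output))
    return hashlib.sha256(data).hexdigest()

def poc_prove(program_src, inputs):
    """POC_PROVE(P, I) -> (O, Q): run P on I, return the output and a 'proof'."""
    env = {}
    exec(program_src, env)          # the program is assumed to define a function f(I)
    output = env["f"](inputs)
    proof = _commit(program_src, inputs, output)   # toy stand-in for a real SNARK
    return output, proof

def poc_verify(program_src, output, proof, inputs):
    """POC_VERIFY(P, O, Q) -> {0, 1}: this toy verifier cheats by re-executing P,
    which a real succinct verifier would never need to do."""
    recomputed_output, recomputed_proof = poc_prove(program_src, inputs)
    return int(recomputed_output == output and recomputed_proof == proof)

P = "def f(x):\n    return sum(i * i for i in range(x))"
O, Q = poc_prove(P, 10_000)
print(poc_verify(P, O, Q, 10_000))  # 1
```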

Status: Great theoretical and practical progress.

This basically means building a SNARK (or STARK, or SHARK, or…). And we have done it! SNARKs are now increasingly well understood, and are already being used in multiple blockchains (including tornado.cash on Ethereum). SNARKs are extremely useful, both as a privacy technology (see Zcash and tornado.cash) and as a scalability technology (see ZK Rollup, STARKDEX, and STARKed erasure-coded data roots).

There are still challenges with efficiency; making arithmetization-friendly hash functions is one big problem, and efficiently proving random memory accesses is another. Furthermore, there is the open question of whether the O(n * log(n)) blowup in proving time is a fundamental limit, or whether there is some way to make a succinct proof with only linear overhead, as in Bulletproofs (which unfortunately take linear time to verify). There is also the ever-present risk that the existing schemes contain bugs. In general, the problems lie in the details rather than the fundamentals.

Code obfuscation

The holy grail is to create an obfuscator O such that, given any program P, the obfuscator can produce a second program O(P) = Q such that P and Q return the same output if given the same input and, importantly, Q reveals no information whatsoever about the internals of P. One can hide inside Q a password or a secret encryption key, or one can simply use Q to hide the proprietary workings of the algorithm itself.

Status: Progress is slow.

Simply put, the problem is that we want a way to “encrypt” a program, so that the encrypted program still gives the same outputs for the same inputs, but the “internals” of the program are hidden. An example use case for obfuscation is a program containing a private key, where the program only allows that private key to sign certain messages.
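
That private-key example can be made concrete with a small, deliberately unobfuscated sketch: the program below is the kind of thing one would want to hand to an obfuscator, since in this plain form anyone who reads the code can extract the key. The key, the message policy, and the use of Python’s `hmac` as a stand-in for a real signature scheme are all illustrative assumptions.

```python
import hmac, hashlib

SECRET_KEY = b"this key would be extractable without obfuscation"

def restricted_signer(message: bytes) -> bytes:
    """Sign a message only if it matches a fixed policy.

    An obfuscated version O(P) of this program would let anyone obtain
    signatures on policy-compliant messages while revealing nothing about
    SECRET_KEY; this plain version obviously leaks the key to anyone
    who can read it.
    """
    if not message.startswith(b"withdraw<=100:"):
        raise ValueError("message violates the signing policy")
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

print(restricted_signer(b"withdraw<=100: pay 50 to alice").hex())
```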

A solution to code obfuscation would be very useful for blockchain protocols. The use cases are subtle, because one must deal with the possibility that an obfuscated on-chain program will be copied and run in an environment different from the chain itself, but there are many possibilities. One that I am personally interested in is removing the centralized operator from collusion-resistance gadgets, by replacing the operator with an obfuscated program that contains some proof of work, making it very expensive to run it multiple times with different inputs in an attempt to determine an individual participant’s actions.

Unfortunately, this continues to be a hard problem, and work on it is ongoing. On one side, constructions are being built that try to reduce the number of assumptions about mathematical objects that we do not know actually exist in practice (for example, general cryptographic multilinear maps); on the other side, there are attempts to make practical implementations of the desired mathematical objects. However, all of these paths are still quite far from producing something viable and known to be secure. For a more general overview of the problem, see https://eprint.iacr.org/2019/463.pdf.

Hash-based cryptography

Problem: Create a signature algorithm that relies on no security assumption other than the random-oracle property of hashes, that maintains 160 bits of security against classical computers, with optimal size and other properties.

Status: Some progress.

There have been two strands of progress in this area since 2014. SPHINCS, a “stateless” signature scheme (meaning that using it multiple times does not require remembering information such as a nonce), was released shortly after the “hard problems” list was published, and provides a purely hash-based signature scheme of roughly 41 kB in size. In addition, STARKs have been developed, and one can create signatures of similar size based on them. The fact that not only signatures, but general-purpose zero-knowledge proofs, are possible with just hashes was something I definitely did not expect five years ago; I am very happy that this is the case. That said, size continues to be an issue, and ongoing progress (such as the recent DEEP FRI) keeps reducing proof sizes, though it looks like further progress will be incremental.

The main unsolved problem of hash-based cryptography is aggregate signatures, similar to what BLS aggregation makes possible. It is known that we can simply make a STARK over many Lamport signatures, but this is inefficient; a more efficient scheme would be welcome. (If you are wondering whether hash-based public-key encryption is possible, the answer is no: you cannot do anything with more than a quadratic attack cost.)
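
For readers unfamiliar with what a hash-based signature even looks like, here is a minimal sketch of a Lamport one-time signature, the basic building block mentioned above (SPHINCS and STARK-based schemes are far more elaborate). SHA-256 and a 256-bit message hash are standard choices; note that the scheme below must never be used to sign two different messages with the same key.

```python
import hashlib, os

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def keygen():
    # One random preimage pair per bit of the 256-bit message hash.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def sign(sk, message: bytes):
    digest = _h(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    # Reveal one preimage per bit; this is why the key is one-time only.
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(pk, message: bytes, signature) -> bool:
    digest = _h(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(_h(signature[i]) == pk[i][bit] for i, bit in enumerate(bits))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig), verify(pk, b"goodbye", sig))  # True False
```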

Consensus theory problems

ASIC-resistant proof of work

One way to solve this problem is to create a proof-of-work algorithm based on a type of computation that is very difficult to specialize… For a more in-depth discussion of ASIC-resistant hardware, see https://blog.ethereum.org/2014/06/19/mining/.

Status: Solved as far as we can.

About six months after the “hard problems” list was published, Ethereum settled on its ASIC-resistant proof-of-work algorithm: Ethash. Ethash is what is known as a memory-hard algorithm. The theory is that random-access memory in ordinary computers is already well optimized, and is therefore difficult to improve on in specialized hardware. Ethash aims to achieve ASIC resistance by making memory access the dominant part of running the PoW computation.

Ethash was not the first memory-hard algorithm, but it did add one innovation: it uses pseudo-random lookups over a two-level DAG, allowing for two ways of evaluating the function. First, one can compute it quickly if one has the entire (roughly 2 GB) DAG; this is the memory-hard “fast path”. Second, one can compute it much more slowly (but still fast enough to check a single provided solution quickly) if one only has the top level of the DAG; this is used for block verification.
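
A heavily simplified sketch of that two-path structure is shown below; the real Ethash dataset generation, mixing function, and sizes are much more involved, so treat the cache size, item derivation, and lookup count here as illustrative assumptions only.

```python
import hashlib

CACHE_SIZE = 1024         # stands in for the small top-level cache (~16 MB in reality)
DATASET_SIZE = 16 * 1024  # stands in for the full DAG (~2 GB in reality)
LOOKUPS = 64              # pseudo-random dataset accesses per hash attempt

def _h(*parts) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

cache = [_h(i.to_bytes(4, "big")) for i in range(CACHE_SIZE)]

def dataset_item(i: int) -> bytes:
    # Slow path: derive a full-dataset item from the small cache on demand.
    return _h(cache[i % CACHE_SIZE], i.to_bytes(4, "big"))

# Fast path: a miner precomputes the whole dataset once and then only does
# cheap memory lookups, which is what makes the algorithm memory-hard.
full_dataset = [dataset_item(i) for i in range(DATASET_SIZE)]

def hashimoto(header: bytes, nonce: int, lookup) -> bytes:
    mix = _h(header, nonce.to_bytes(8, "big"))
    for _ in range(LOOKUPS):
        index = int.from_bytes(mix[:4], "big") % DATASET_SIZE
        mix = _h(mix, lookup(index))   # pseudo-random walk over the dataset
    return mix

header, nonce = b"block header", 42
fast = hashimoto(header, nonce, lambda i: full_dataset[i])  # miner's path
slow = hashimoto(header, nonce, dataset_item)               # light verifier's path
print(fast == slow)  # True: both paths compute the same value
```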

Ethash has proven remarkably successful at resisting ASICs; after three years and billions of dollars of block rewards, ASICs do exist, but they are at best 2-5 times more power- and cost-efficient than GPUs. ProgPoW has been proposed as an alternative, but there is growing consensus that ASIC-resistant algorithms inevitably have a limited lifespan, and that ASIC resistance has downsides of its own, because it makes 51% attacks cheaper (see, for example, the 51% attack on Ethereum Classic).

Useful proof of work

Make the proof of work something that is simultaneously useful; a common candidate is something like Folding@home, an existing program in which users can download software onto their computers to simulate protein folding and provide researchers with a large supply of data to help them cure diseases.

Status: Probably not feasible, with one exception.

The challenge with useful proof of work is that a proof-of-work algorithm requires many properties:

  • Hard to compute
  • Easy to verify
  • Does not depend on large amounts of external data
  • Can be efficiently computed in small “bite-sized” chunks

Unfortunately, there are not many useful computations that preserve all of these properties, and most computations that do have all of these properties and are “useful” are only “useful” for far too short a time to build a cryptocurrency around them.

However, there is one possible exception: zero-knowledge proof generation. Zero-knowledge proofs of aspects of blockchain validity (for example, data availability roots) are difficult to compute but easy to verify. Furthermore, they are durably difficult to compute: if proofs of “highly structured” computation become too easy, one can simply switch to verifying the blockchain’s entire state transition, which becomes extremely expensive because the virtual machine and random memory accesses need to be modeled.

Zero-knowledge proofs of blockchain validity provide great value to users of the blockchain, since they can substitute for the need to verify the chain directly; Coda is already doing this, albeit with a simplified blockchain design that is heavily optimized for provability. Such proofs can significantly help improve the blockchain’s safety and scalability. That said, the total amount of computation that realistically needs to be done is still far less than what proof-of-work miners currently perform, so this would at best be an add-on to a proof-of-stake blockchain, not a full consensus algorithm.

Proof of stake

Another approach to solving the mining centralization problem is to abolish mining entirely and move to some other mechanism for counting the weight of each node in the consensus. The most popular alternative under discussion to date is “proof of stake”: that is, instead of treating the consensus model as “one unit of CPU power, one vote”, it becomes “one currency unit, one vote”.

Status: Great theoretical progress, pending more real-world evaluation.

Toward the end of 2014, it became clear to the proof-of-stake community that some form of “weak subjectivity” is unavoidable. To maintain economic security, nodes need to obtain a recent checkpoint from outside the protocol when they sync for the first time, and again if they go offline for more than a few months. This was a difficult pill to swallow; many PoW advocates still cling to PoW precisely because in a PoW chain the “head” of the chain can be discovered with the only data from a trusted source being the blockchain client software itself. PoS advocates, however, were willing to swallow the pill, because they felt the added trust requirements were not large. From there, the path to proof of stake through long-duration security deposits became clear.

Most interesting consensus algorithms today are fundamentally similar to PBFT, but replace the fixed validator set with a dynamic list that anyone can join by sending tokens into a system-level smart contract with time-locked withdrawals (in some cases, a withdrawal can take up to four months to complete). In many cases (including Ethereum 2.0), these algorithms achieve “economic finality” by penalizing validators that are caught violating the protocol in certain ways.
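
A toy sketch of that deposit-and-penalty mechanic: validators join by locking a deposit, withdrawals are time-locked, and provably contradictory votes are slashed. All of the parameters (deposit size, the four-month delay, the slashing fraction) are illustrative, not the Ethereum 2.0 values.

```python
WITHDRAWAL_DELAY = 4 * 30 * 24 * 3600  # illustrative time lock (~4 months), in seconds
SLASH_FRACTION = 1.0                    # illustrative: burn the whole deposit

class ValidatorSet:
    def __init__(self):
        self.deposits = {}      # validator -> staked amount
        self.withdrawing = {}   # validator -> time when funds unlock
        self.votes = {}         # (validator, height) -> block hash voted for

    def deposit(self, validator, amount):
        self.deposits[validator] = self.deposits.get(validator, 0) + amount

    def start_withdrawal(self, validator, now):
        # Funds stay slashable until the time lock expires.
        self.withdrawing[validator] = now + WITHDRAWAL_DELAY

    def vote(self, validator, height, block_hash):
        key = (validator, height)
        if key in self.votes and self.votes[key] != block_hash:
            self.slash(validator)   # equivocation: two conflicting votes at one height
        else:
            self.votes[key] = block_hash

    def slash(self, validator):
        burned = self.deposits.pop(validator, 0) * SLASH_FRACTION
        print(f"slashed {validator}: {burned} burned")

vs = ValidatorSet()
vs.deposit("alice", 32)
vs.vote("alice", 100, "0xaaa")
vs.vote("alice", 100, "0xbbb")  # conflicting vote -> deposit is slashed
```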

As of today, we have (among many other algorithms):

  • Casper FFG: https://arxiv.org/abs/1710.09437
  • Tendermint: https://tendermint.com/docs/spec/consensus/consensus.html
  • HotStuff: https://arxiv.org/abs/1803.05069
  • Casper CBC: https://vitalik.ca/general/2018/12/05/cbc_casper.html

Refinement of these algorithms is ongoing. Eth2 Phase 0, the chain that will implement FFG, is currently under implementation and enormous progress has been made. In addition, Tendermint has been running, in the form of the Cosmos chain, for several months. In my view, the remaining arguments about proof of stake have to do with optimizing the economic incentives and further formalizing the strategy for responding to 51% attacks. In addition, the Casper CBC specification could still use concrete efficiency improvements.

Proof of storage

A third way to solve this problem is to use a scarce computational resource other than computational power or currency. In this respect, the two main alternatives that have been proposed are storage and bandwidth. There is, in principle, no way to provide an after-the-fact cryptographic proof that bandwidth was given or used, so proof of bandwidth is most accurately considered a subset of social proof, discussed later; proof of storage, however, is definitely something that can be done computationally. One advantage of proof of storage is that it is completely ASIC-resistant; the kind of storage we have in hard drives is already close to optimal.

Status: A lot of theoretical progress, though there is still much to be done, along with more real-world evaluation.

A number of blockchains plan to use proof-of-storage protocols, including Chia and Filecoin. That said, these algorithms have not been tested in the wild. My main concern is centralization: will these algorithms actually be dominated by smaller users using spare storage capacity, or will they be dominated by large mining farms?

Economics

Stable-value cryptoassets

One of the main problems with Bitcoin is price volatility. Problem: Construct a cryptoasset with a stable price.

Status: Some progress has been made.

MakerDAO is now live and has been holding stable for nearly two years. It has survived a 93% drop in the value of its underlying collateral asset (ETH), and there is now more than $100 million of DAI issued. It has become a mainstay of the Ethereum ecosystem, and many Ethereum projects have integrated or are integrating with it. Other synthetic token projects, such as UMA, are also rapidly gaining traction.

However, while the MakerDAO system has survived the tough economic conditions of 2019, those were by no means the toughest conditions that could occur. In the past, Bitcoin has fallen by 75% over the course of two days; the same could happen to ether or any other collateral asset some day. Attacks on the underlying blockchain are an even larger untested risk, especially if compounded by simultaneous price decreases.

Another major challenge, arguably the larger one, is that the stability of MakerDAO-like systems depends on some underlying oracle scheme. Different attempts at oracle systems do exist, but how well they hold up under large amounts of economic stress remains to be seen. So far, the collateral controlled by MakerDAO has been lower than the value of the MKR token; if that relationship reverses, MKR holders may have a collective incentive to try to “loot” the MakerDAO system. There are ways to try to protect against such attacks, but they have not been tested in real life.
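
A minimal sketch of the overcollateralized-debt mechanic that MakerDAO-style systems rely on, under assumed parameters (a 150% minimum collateral ratio and a single trusted price feed); it illustrates why both a collateral price crash and a corrupted oracle are the key risks discussed above.

```python
MIN_COLLATERAL_RATIO = 1.5  # illustrative; real systems tune this per asset

class Vault:
    def __init__(self, eth_locked, dai_drawn):
        self.eth_locked = eth_locked
        self.dai_drawn = dai_drawn

    def collateral_ratio(self, eth_price_usd):
        # The price comes from an oracle; if the oracle lies, everything below lies too.
        return (self.eth_locked * eth_price_usd) / self.dai_drawn

    def is_safe(self, eth_price_usd):
        return self.collateral_ratio(eth_price_usd) >= MIN_COLLATERAL_RATIO

vault = Vault(eth_locked=10, dai_drawn=1000)
print(vault.is_safe(eth_price_usd=200))  # True: 200% collateralized
print(vault.is_safe(eth_price_usd=100))  # False: a crash pushes it below 150%, eligible for liquidation
```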

Decentralized public goods incentives

One of the challenges in economic systems in general is the problem of “public goods”. For example, suppose there is a scientific research project that will cost $1 million to complete, and it is known that if it is completed, the resulting research will save one million people $5 each. In total, the social benefit is clear… [But] from the point of view of each individual person, contributing does not make sense…

So far, most solutions to public goods problems have involved centralization. Additional assumptions and requirements: a fully trustworthy oracle exists for determining whether or not a certain public-good task has been completed (in reality this is false, but that is the domain of another problem).

Status: Some progress has been made.

Public goods funding is generally understood to be split into two problems: the funding problem (where to get funding for public goods from) and the preference aggregation problem (how to determine what is a genuine public good in the first place, rather than some single individual’s pet project). This problem focuses specifically on the former, assuming the latter is solved (see the section on “decentralized contribution metrics” below for work on that problem).

In general, there have been no big new breakthroughs here. There are two major categories of solutions. First, we can try to elicit individual contributions, giving people social rewards for doing so. My own proposal for charity through marginal price discrimination is one example; another is the anti-malaria donation badges on Peepeth. Second, we can collect funds from applications that have network effects. Within blockchain land, there are several ways to do this:

  • Issuing coins
  • Taking a portion of transaction fees at the protocol level (e.g. through EIP 1559)
  • Taking a portion of fees from some layer-2 application (e.g. Uniswap, a scaling solution, or even state rent in an Ethereum 2.0 execution environment)
  • Charging some other kind of fee (e.g. ENS registration)

Outside of blockchain land, this is just the age-old question of how to collect taxes if you are a government, and how to charge fees if you are a business or other organization.

Reputation systems

Problem: Design a formalized reputation system, including a score rep(A,B) -> V, where V is the reputation of B from the point of view of A, a mechanism for determining the probability that one party can be trusted by another, and a mechanism for updating reputation given a record of a particular open or finalized interaction.

Status: Progress is slow.

There has not been much work on reputation systems since 2014. Perhaps the best approach is to use token-curated registries to create curated lists of trustworthy entities/objects; the Kleros ERC20 TCR (yes, a token-curated registry of legitimate ERC20 tokens) is one example, and there is even an alternative interface to Uniswap (http://uniswap.ninja) that uses it as the backend to obtain its list of tokens, ticker symbols, and logos.

Reputation systems of the subjective variety have not really been tried, probably because there is simply not enough information about the “social graph” of people’s connections to each other published in some usable form. If such information starts to exist for other reasons, then subjective reputation systems may become more popular.
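
For concreteness, here is one toy way the rep(A, B) -> V interface could be instantiated as a subjective, social-graph-based score: each party keeps direct trust scores for its contacts, and indirect reputation decays along trust paths. The decay factor and maximum depth are arbitrary assumptions for the example, not a proposal from the article.

```python
def rep(a, b, trust_graph, decay=0.5, max_depth=3, _visited=None):
    """Subjective reputation of b from a's point of view, in [0, 1].

    trust_graph[x] maps x's direct contacts to trust scores in [0, 1];
    indirect trust is the best path product, attenuated by `decay` per hop.
    """
    if a == b:
        return 1.0
    if max_depth == 0:
        return 0.0
    visited = (_visited or set()) | {a}
    best = 0.0
    for peer, direct in trust_graph.get(a, {}).items():
        if peer in visited:
            continue
        via = direct if peer == b else direct * decay * rep(
            peer, b, trust_graph, decay, max_depth - 1, visited)
        best = max(best, via)
    return best

graph = {
    "alice": {"bob": 0.9},
    "bob": {"carol": 0.8},
}
print(rep("alice", "carol", graph))  # 0.9 * 0.5 * 0.8 ~= 0.36
```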

Proof of Excellence

For the problem of token distribution specifically, there is an interesting and largely unexplored solution (there are reasons why it cannot be used for mining so easily): using tasks that are socially useful but require original, human-driven creative effort and talent. For example, one could come up with a “proof of proof” coin that rewards players for coming up with mathematical proofs of certain theorems.

Status: No progress; the problem is largely forgotten.

The main alternative approach to token distribution that has become popular is airdrops; typically, tokens are issued at launch either in proportion to existing holdings of some other token, or based on some other metric (as in the Handshake airdrop). Verifying human creativity directly has not really been attempted, and with recent progress in artificial intelligence, creating a task that only humans can do but computers can verify may simply be too difficult.

Anti-Sybil systems

A problem related to reputation systems is the challenge of creating a “unique identity system”: a system for generating tokens that prove an identity is not part of a Sybil attack… However, we would like to have a system that is more egalitarian than “one dollar, one vote”; arguably, one person, one vote would be the ideal.

Status: Some progress has been made.

Many attempts have been made at solving the unique-human problem. Attempts that come to mind include (an incomplete list!):

  • HumanityDAO: https://www.humanitydao.org/
  • Pseudonym parties: https://bford.info/pub/net/sybil.pdf
  • POAP (“Proof of Attendance Protocol”): https://www.poap.xyz/
  • BrightID: https://www.brightid.org/

With growing interest in techniques such as quadratic voting and quadratic funding, the need for some kind of human-based anti-Sybil system continues to grow. Hopefully, the ongoing development of these techniques, and new ones, can meet that need.

Decentralized contribution metrics

Unfortunately, incentivizing the production of public goods is not the only problem that centralization solves. The other problem is determining, first, which public goods are worth producing in the first place, and second, to what extent a particular effort actually accomplished the production of that public good. This challenge concerns the latter issue.

Status: Some progress, with a change of focus.

More recent work on determining the value of public-good contributions does not try to separate determining the task from determining the quality of its completion; the reason is that, in practice, the two are difficult to distinguish. Work done by specific teams tends to be non-fungible and subjective enough that the most reasonable approach is to treat the relevance of the task and the quality of its performance as a single package, and use the same technique to evaluate both.

Fortunately, there has been great progress here, particularly the discovery of quadratic funding. Quadratic funding is a mechanism in which individuals can make donations to projects, and then, based on the number of donors and the amounts they donated, a formula is used to calculate how much they would have donated if they had been perfectly coordinated with each other (that is, if they had taken each other’s interests into account and not fallen prey to the tragedy of the commons).

For any given project, the difference between the amount that would have been donated and the amount that was actually donated is given to the project as a subsidy from some central pool. Note that this mechanism focuses on satisfying the values of some community, not on satisfying some given goal regardless of whether anyone cares about it. Because of the complexity of the values problem, this approach is likely to be much more robust to unknown unknowns.
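
The basic formula is compact enough to state directly. In the standard (non-pairwise-bounded) version, each project’s ideal total is the square of the sum of the square roots of its individual contributions, and the matching subsidy is the gap between that total and what was actually contributed; the sketch below is that basic version, ignoring matching-pool caps and the collusion mitigations mentioned below.

```python
from math import sqrt

def quadratic_match(contributions):
    """contributions: list of individual donation amounts to one project.

    Returns (ideal_total, subsidy) where
    ideal_total = (sum of square roots of contributions) ** 2
    and the subsidy is the part to be paid out of the central matching pool.
    """
    raised = sum(contributions)
    ideal_total = sum(sqrt(c) for c in contributions) ** 2
    return ideal_total, ideal_total - raised

# Many small donors attract a much larger match than one big donor
# giving the same total amount.
print(quadratic_match([1] * 100))   # (10000.0, 9900.0)
print(quadratic_match([100]))       # (100.0, 0.0)
```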

Quadratic funding has even been tried in real life, with considerable success, in the recent Gitcoin quadratic funding rounds. There has also been incremental progress on improving quadratic funding and similar mechanisms, in particular pairwise-bounded quadratic funding to mitigate collusion. There has also been work on specifying and implementing bribe-resistant voting technology, which prevents users from proving to third parties how they voted; this rules out many kinds of collusion and bribery attacks.

Decentralized success metrics

Problem: Come up with and implement a decentralized method for measuring numerical real-world variables… The system should be able to measure anything that humans can currently reach a rough consensus on (for example, the price of an asset, the temperature, the global CO2 concentration).

Status: Some progress has been made.

This is now generally referred to as the “oracle problem”. The largest known running instance of a decentralized oracle is Augur, which has processed outcomes for millions of dollars of bets. Token-curated registries, such as the Kleros TCR for tokens, are another example. However, these systems still have not seen a real-world test of their forking mechanisms, whether due to a highly contentious question or due to an attempted 51% attack. There is also research on the oracle problem happening outside of the blockchain space, in the form of the “peer prediction” literature.

Another looming challenge is that people want to rely on these systems to guide transfers of assets larger than the economic value of the system’s native token. Under those conditions, token holders in theory have an incentive to collude to give wrong answers and steal the funds.

In such a case, the system would fork and the original system’s tokens would likely become worthless, but the original token holders would still get away with the returns from whatever asset transfers they misdirected. Stablecoins are a particularly serious example of this. One approach to solving the problem would be a system that assumes altruistically honest data providers do exist, creates a mechanism to identify them, and only allows them to churn slowly, so that if malicious voters start to be admitted, users of systems that rely on the oracle can first complete an orderly exit. In any case, further development of oracle technology is a very important problem.
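
The incentive problem in the preceding two paragraphs reduces to a simple comparison, sketched below: if the value an oracle’s answer directs exceeds what the colluding majority stands to lose when the system forks and their tokens are devalued, collusion is profitable. The cost model here (attackers lose their entire token stake) is a deliberately crude assumption.

```python
def collusion_profitable(value_directed, attacker_token_value,
                         token_loss_fraction=1.0):
    """Crude model of the attack discussed above.

    value_directed: assets whose transfer the oracle's answer controls.
    attacker_token_value: market value of the colluding majority's tokens.
    token_loss_fraction: share of that value destroyed when honest users
    fork away (assumed to be a total loss here).
    """
    cost = attacker_token_value * token_loss_fraction
    return value_directed > cost

# An oracle whose token is worth $10M should not be trusted to referee
# $50M of stablecoin collateral under this model.
print(collusion_profitable(value_directed=50e6, attacker_token_value=10e6))  # True
print(collusion_profitable(value_directed=5e6, attacker_token_value=10e6))   # False
```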

New problems

If I were writing the hard problems list again in 2019, some entries would be continuations of the problems above, but there would be significant changes of emphasis, as well as significant new problems.

Here are a few picks:

  • Cryptographic obfuscation: same as #4 above.
  • Ongoing work on post-quantum cryptography: both hash-based and based on post-quantum-secure “structured” mathematical objects (e.g. elliptic curve isogenies, lattices…).
  • Anti-collusion infrastructure: ongoing work on and refinement of https://ethresear.ch/t/minimal-anti-collusion-infrastructure/5413, including adding privacy against the operator, adding multi-party computation in a maximally practical way, etc.
  • Oracles: same as #16 above, but dropping the emphasis on “success metrics” and focusing on the general “get real-world data” problem.
  • Unique human identities (or, more realistically, semi-unique human identities): same as #15 above, but emphasizing a less “absolute” solution: it should be much harder to get two identities than one, but making it impossible to get multiple identities is both impossible and potentially harmful even if we do succeed.
  • Homomorphic encryption and multi-party computation: ongoing improvements are still needed for practicality.
  • Decentralized governance mechanisms: DAOs are cool, but current DAOs are still very primitive; we can do better.
  • Fully formalizing responses to PoS 51% attacks: see https://ethresear.ch/t/responding-to-51-attacks-in-casper-ffg/6363.
  • More sources of public goods funding: the ideal is to charge for congestible resources inside systems that have network effects (such as transaction fees), but doing so in decentralized systems requires public legitimacy; hence this is a social problem as well as a technical one of finding possible sources.
  • Reputation systems: same as #12 above.

In general, base-layer problems are slowly but surely diminishing, but application-layer problems are only just getting started.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/v-gods-2019-and-2021/