Editor’s Note: Among the strongest cryptographic technologies to emerge in the past decade, zero-knowledge proofs stand out. In the blockchain space, they have two major applications: scalability and privacy. zk-SNARKs, one family of zero-knowledge proof technologies, have made great breakthroughs in recent years, and general scaling solutions built on zk-SNARK proofs, such as zkSync, Scroll, and Hermez, have appeared on Ethereum.

However, implementing zk-SNARKs is not trivial: after generating a zk-SNARK proof for a claim, the verifier needs to somehow check millions of steps in the computation. This is where polynomials come into play: computations can be encoded as polynomials, and then a special kind of polynomial “hash” (a polynomial commitment) allows verifiers to check polynomial equations in an extremely short amount of time, even if the polynomials behind those computations are incredibly large.

In this article, Vitalik describes how zk-SNARK technology works and why it is difficult, then explains how polynomials and polynomial commitments make zk-SNARK implementations more efficient.

Special thanks to Dankrad Feist, Karl Floersch, and Hsiao-Wei Wang for their feedback and proofreading.

Perhaps the strongest cryptographic technology to come out of the past decade is **general-purpose succinct zero-knowledge proofs, usually called zk-SNARKs (zero-knowledge succinct arguments of knowledge)**. **A zk-SNARK allows you to generate a proof that some computation has some particular output, in such a way that the proof can be verified quickly even if the underlying computation takes a very long time to run. The “ZK” (zero knowledge) part adds an extra feature: the proof can hide some of the inputs to the computation.**

For example, you can generate a proof for the statement: “I know a secret number such that if you take the word cow, append the number to the end, and SHA256-hash the result 100 million times, the output hash starts with 0x57d00485aa”. The verifier can check this proof in far less time than it would take to perform 100 million hashes itself, and the proof reveals nothing about the secret number.
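For concreteness, here is a sketch of the computation being claimed, with a tiny iteration count and a made-up secret suffix (the real example uses 100 million rounds, which would take a while to run):

```python
import hashlib

def iterated_sha256(data: bytes, rounds: int) -> bytes:
    """Hash `data` with SHA256, `rounds` times in a row."""
    for _ in range(rounds):
        data = hashlib.sha256(data).digest()
    return data

secret = "42"  # hypothetical secret suffix, for illustration only
digest = iterated_sha256(("cow" + secret).encode(), rounds=1000)
prefix = digest.hex()[:10]  # the prover claims this prefix; a SNARK verifier
                            # never has to re-run the hashing loop itself
```

The point of the SNARK is precisely that the verifier does not run `iterated_sha256`; they only check a short proof about its output.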

In the field of blockchain, the technology has two major application scenarios:

- **Scalability**: if verifying a block is time-consuming, one person can verify it and generate a proof, and everyone else can quickly verify the proof instead.
- **Privacy**: you can prove that you have the right to transfer some asset (you received it, and you have not yet spent it) without revealing where the asset came from. This keeps the identities of the transacting parties hidden while still guaranteeing the transaction’s validity.

However, zk-SNARKs are quite complex; in fact, as recently as 2014-17, they were often referred to as “moon math”. The good news is that since then, the protocols have been simplified and our understanding of them has grown. This blog post will try to explain how ZK-SNARKs work in a way that anyone with an average level of mathematics can understand.

Note that we’ll focus on scalability; privacy for these protocols is relatively easy to achieve once scalability is in place, so we’ll return to this topic at the end.

**Why ZK-SNARKs are hard**

Take the opening example: we have a number (the word “cow” followed by the secret input, encoded as an integer), we compute the SHA256 hash of that number, then we hash the result again, and repeat 99,999,999 more times, and finally we check the beginning of the output. That is a huge amount of computation.

A “succinct” proof is one where proof size and verification time grow much slower than the computation required to verify. If we want a “succinct” proof, we can’t ask the verifier to do some computation in each round of hashing (because then, the verification time will be proportional to the amount of computation). Instead, the verifier must somehow check the entire operation without peering into every part of the operation.

A natural technique is random sampling: have the verifier check the correctness of the computation at, say, 500 randomly chosen places, and if all 500 checks pass, assume with high probability that the rest of the computation is also correct.

This process can even be turned into a non-interactive proof using the **Fiat-Shamir heuristic**: the prover computes the Merkle root of the computation trace, pseudorandomly picks 500 indices based on that Merkle root, and provides the corresponding 500 Merkle branches. The core idea is that the prover does not know which branches will be revealed until they have already generated a “commitment” to the data. If a malicious prover tries to tamper with the data after learning which indices will be checked, the Merkle root changes, which yields a new set of random indices, which requires tampering with the data again… leaving the malicious prover stuck in an endless loop, unable to achieve their goal.
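As a toy illustration of this commit-then-sample pattern (the Merkle construction, index count, and domain size here are all made up for illustration):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Merkle root of a power-of-two number of leaves."""
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def sample_indices(root: bytes, count: int, domain: int):
    """Derive `count` pseudorandom indices in [0, domain) from the commitment."""
    return [int.from_bytes(h(root + i.to_bytes(4, "big")), "big") % domain
            for i in range(count)]

leaves = [str(i).encode() for i in range(8)]
root = merkle_root(leaves)
idxs = sample_indices(root, count=5, domain=len(leaves))
# the indices depend on `root`, so the prover cannot predict them before committing
```

Changing any leaf changes `root`, which changes `idxs`, which is exactly the loop a cheating prover gets stuck in.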

Unfortunately, simply spot-checking the computation at random has a fatal flaw: the computation itself is not robust. If a malicious prover flips a single bit somewhere in the middle of the computation, they can produce a completely different result, and a random-sampling verifier will almost never detect it.

A single intentionally inserted error can make the computation give a completely wrong result while being almost impossible to catch with random checks.

Faced with this problem, many people who set out to design a zk-SNARK protocol get stuck at this point and give up. How on earth can a verifier check every piece of the computation without looking at each piece individually? As it turns out, there is a clever solution.

**Polynomials**

Polynomials are a special class of algebraic expressions that have the following form:

- *x* + 5
- *x*^4
- *x*^3 + 3*x*^2 + 3*x* + 1
- 628*x*^271 + 318*x*^270 + 530*x*^269 + … + 69*x* + 381

That is, they are a finite sum of terms of the form *c·x^k*.

Polynomials have many interesting properties. But here, we’ll focus on one: **a polynomial is a single mathematical object that can contain an unbounded amount of information (think of a polynomial as a list of numbers)**. The fourth example above contains 816 digits of tau, and we can easily imagine a polynomial containing even more.

Furthermore, **a single equation between polynomials can represent an unbounded number of equations between numbers**. For example, consider the equation *A*(*x*) + *B*(*x*) = *C*(*x*). If this equation is true, then all of the following are also true:

- *A*(0) + *B*(0) = *C*(0)
- *A*(1) + *B*(1) = *C*(1)
- *A*(2) + *B*(2) = *C*(2)
- *A*(3) + *B*(3) = *C*(3)

And so on for every possible coordinate. You can even construct polynomials deliberately to represent sets of numbers, so that you can check many equations at once. For example, suppose you want to check:

- 12 + 1 = 13
- 10 + 8 = 18
- 15 + 8 = 23
- 15 + 13 = 28

You can use a technique called Lagrange interpolation to construct the polynomials: an *A*(*x*) whose outputs at some specific set of coordinates (say, (0, 1, 2, 3)) are (12, 10, 15, 15), a *B*(*x*) whose outputs at the same coordinates are (1, 8, 8, 13), and a *C*(*x*) whose outputs there are (13, 18, 23, 28). To be precise, the corresponding polynomials are:

- *A*(*x*) = -2*x*^3 + 9.5*x*^2 - 9.5*x* + 12
- *B*(*x*) = 2*x*^3 - 9.5*x*^2 + 14.5*x* + 1
- *C*(*x*) = 5*x* + 13

Checking the equation *A*(*x*) + *B*(*x*) = *C*(*x*) with these polynomials is equivalent to checking all four of the above equations at the same time.
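As a quick sanity check, here is a sketch of that interpolation using exact rational arithmetic (the helper function is ours, written for illustration, not taken from any particular library):

```python
from fractions import Fraction

def lagrange_interpolate(xs, ys):
    """Return a function evaluating the unique degree < len(xs) polynomial
    passing through the points (xs[i], ys[i])."""
    def poly(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = Fraction(yi)
            for j, xj in enumerate(xs):
                if i != j:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return poly

xs = [0, 1, 2, 3]
A = lagrange_interpolate(xs, [12, 10, 15, 15])
B = lagrange_interpolate(xs, [1, 8, 8, 13])
C = lagrange_interpolate(xs, [13, 18, 23, 28])

# A + B = C at the interpolation coordinates *and* at fresh points,
# because both sides are the same degree-3 polynomial.
assert all(A(x) + B(x) == C(x) for x in range(20))
```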

**Comparing a polynomial to itself**

You can even use a single polynomial equation to check relationships between a large number of adjacent evaluations of the same polynomial. This goes a step further. Suppose you want to check that, for a given polynomial *F*, *F*(*x*+2) = *F*(*x*) + *F*(*x*+1) holds over the integer range {0, 1 … 98} (so if you also check *F*(0) = *F*(1) = 1, then *F*(100) will be the 100th Fibonacci number).

As a polynomial, *F*(*x*+2) − *F*(*x*+1) − *F*(*x*) will not be exactly zero, because nothing constrains its values outside the range *x* = {0, 1 … 98}. But we can do something clever. In general, there is a rule: if a polynomial *P* is zero on some set *S* = {*x*1, *x*2 … *xn*}, then it can be expressed as *P*(*x*) = *Z*(*x*) ∗ *H*(*x*), where *Z*(*x*) = (*x* − *x*1) ∗ (*x* − *x*2) ∗ … ∗ (*x* − *xn*) and *H*(*x*) is also a polynomial. **In other words, any polynomial that equals zero on some set is a (polynomial) multiple of the simplest (lowest-degree) polynomial that equals zero on that same set.**

Why is this true? It is a neat corollary of polynomial long division: the factor theorem. We know that when dividing *P*(*x*) by *Z*(*x*), we get a quotient *Q*(*x*) and a remainder *R*(*x*) such that *P*(*x*) = *Z*(*x*) ∗ *Q*(*x*) + *R*(*x*), where the degree of the remainder *R*(*x*) is strictly less than that of *Z*(*x*). Since we know that *P* is zero on all of *S*, the remainder *R* must be zero on all of *S* as well. So we can simply compute *R*(*x*) by polynomial interpolation: it is a polynomial of degree at most *n*−1, and we know *n* of its values (they are all zero on *S*). Interpolating through all zeros gives the zero polynomial, so *R*(*x*) = 0 and *H*(*x*) = *Q*(*x*). (Translator’s note: a polynomial of degree ≤ *n*−1 with *n* roots must be the zero polynomial.)
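Here is a small sketch of that argument in code: we build a polynomial that vanishes on a toy set S = {1, 2, 3} and recover H by exact long division (the coefficient-list representation and helper functions are ours, for illustration):

```python
from fractions import Fraction

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += Fraction(ai) * bj
    return out

def poly_divmod(p, z):
    """Divide p by z, returning (quotient, remainder)."""
    p = [Fraction(c) for c in p]
    q = [Fraction(0)] * (len(p) - len(z) + 1)
    for i in reversed(range(len(q))):
        q[i] = p[i + len(z) - 1] / z[-1]
        for j, zj in enumerate(z):
            p[i + j] -= q[i] * zj
    return q, p[:len(z) - 1]

# Z(x) = (x-1)(x-2)(x-3); P(x) = Z(x) * (x+5) vanishes on S = {1, 2, 3}
Z = poly_mul(poly_mul([-1, 1], [-2, 1]), [-3, 1])
P = poly_mul(Z, [5, 1])
H, R = poly_divmod(P, Z)
assert R == [Fraction(0)] * len(R)   # remainder is the zero polynomial
assert H == [5, 1]                   # recovered quotient H(x) = x + 5
```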

Going back to our example: if we have a polynomial *F* that encodes the Fibonacci numbers (so that *F*(*x*+2) = *F*(*x*) + *F*(*x*+1) holds on *x* = {0, 1 … 98}), then I can convince you that *F* really does satisfy this condition by providing you with the polynomial *H*(*x*) = (*F*(*x*+2) − *F*(*x*+1) − *F*(*x*)) / *Z*(*x*), where *Z*(*x*) = (*x* − 0) ∗ (*x* − 1) ∗ … ∗ (*x* − 98).

You can calculate *Z*(*x*) yourself (ideally ahead of time), check the equation *F*(*x*+2) − *F*(*x*+1) − *F*(*x*) = *Z*(*x*) ∗ *H*(*x*), and if the check passes, then *F*(*x*) really does satisfy the condition!
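To see the encoding side of this concretely, here is a sketch that interpolates *F* through the first 101 Fibonacci numbers with exact arithmetic and spot-checks the recurrence (this illustrates only the encoding, not the proof protocol itself):

```python
from fractions import Fraction

fib = [1, 1]
while len(fib) < 101:
    fib.append(fib[-1] + fib[-2])

xs = list(range(101))

def F(x):
    """Evaluate the unique degree <= 100 polynomial interpolating the
    Fibonacci values at coordinates 0..100 (Lagrange evaluation)."""
    total = Fraction(0)
    for i, xi in enumerate(xs):
        term = Fraction(fib[i])
        for xj in xs:
            if xj != xi:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# F(x+2) - F(x+1) - F(x) vanishes on {0,...,98}: spot-check a few points
for x in (0, 1, 50, 98):
    assert F(x + 2) == F(x + 1) + F(x)
assert F(100) == fib[100]   # the 100th Fibonacci number is encoded in F
```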

Now, take a step back and notice what we did. We converted a 100-step computation (computing the 100th Fibonacci number) into a single polynomial equation. Of course, proving values of Fibonacci numbers is not especially useful, particularly since they have a closed form. But you can use exactly the same underlying technique, with just a few extra polynomials and somewhat more complex equations, to encode arbitrary computations with an arbitrarily large number of steps.

Now, if only there were a way to check equations between polynomials that was much faster than checking every coefficient…

**Polynomial Commitment**

And again, it turns out that such a method exists: **polynomial commitments**. A polynomial commitment is best thought of as **a special way of “hashing” a polynomial, with the extra property that you can check equations between polynomials by checking equations between their hashes**. Different polynomial commitment schemes support different kinds of equation checks.

Here are some common examples of things you can do with various polynomial commitment schemes (we write com(*P*) for “the commitment to polynomial *P*”):

- **Add**: given com(*P*), com(*Q*), and com(*R*), check whether *P* + *Q* = *R*
- **Multiply**: given com(*P*), com(*Q*), and com(*R*), check whether *P* ∗ *Q* = *R*
- **Evaluate at a point**: given com(*P*), *w*, *z*, and a supplementary proof (or “witness”) *Q*, verify that *P*(*w*) = *z*

Notably, these primitives can be combined. If addition and multiplication are supported, then we can already evaluate at a point: to prove that *P*(*w*) = *z*, the prover constructs *Q*(*x*) = (*P*(*x*) − *z*) / (*x* − *w*), and the verifier checks the equation *Q*(*x*) ∗ (*x* − *w*) = *P*(*x*) − *z* using the commitments.

This works because if such a polynomial *Q*(*x*) exists, then *P*(*x*) − *z* = *Q*(*x*) ∗ (*x* − *w*), which means *P*(*x*) − *z* equals zero at *w* (since *x* − *w* equals zero at *w*), and therefore *P*(*x*) equals *z* at *w*.
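A sketch of that identity in code, using synthetic division (the polynomial, point, and helper names here are made up for illustration):

```python
from fractions import Fraction

def synthetic_divide(p, w):
    """Divide polynomial p (coefficients, lowest degree first) by (x - w),
    returning (quotient, remainder)."""
    q = [Fraction(0)] * (len(p) - 1)
    carry = Fraction(0)
    for i in reversed(range(1, len(p))):
        carry = Fraction(p[i]) + carry * w
        q[i - 1] = carry
    remainder = Fraction(p[0]) + carry * w
    return q, remainder

P = [5, 0, 2]            # P(x) = 2x^2 + 5
w = 3
z = 2 * 9 + 5            # P(3) = 23
Q, rem = synthetic_divide([P[0] - z] + P[1:], w)
assert rem == 0          # exact division <=> P(w) = z

# verifier-side spot check at a fresh point: Q(x)*(x - w) = P(x) - z
evalp = lambda c, x: sum(Fraction(ci) * x ** i for i, ci in enumerate(c))
x = Fraction(7)
assert evalp(Q, x) * (x - w) == evalp(P, x) - z
```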

If you can evaluate at a point, then you can perform all sorts of checks. This is because there is a mathematical theorem saying that, **in general, if an equation involving some polynomials holds at a randomly chosen coordinate, then with overwhelming probability it holds for the polynomials as a whole**. So if all we have is a mechanism for proving evaluations at a point, we can verify an equation like our *P*(*x*+2) − *P*(*x*+1) − *P*(*x*) = *Z*(*x*) ∗ *H*(*x*) through an interactive game: the verifier picks a random coordinate *r*, the prover supplies (with evaluation proofs) the values of the polynomials at *r*, and the verifier checks that the equation holds at *r*.

As mentioned earlier, we can use the Fiat-Shamir heuristic to make this non-interactive: the prover sets *r* = hash(com(*P*), com(*H*)) (where hash is any cryptographic hash function; it needs no special properties) and then provides the evaluation proofs at *r*. The prover cannot “cheat” by picking a *P* and *H* that only “fit” at that particular *r*, because they do not know *r* until after they have chosen *P* and *H*!

**Quick review**

- ZK-SNARKs are hard because the verifier needs to somehow check millions of steps in a computation without directly checking each individual step (as that would be too time-consuming).
- We solve this problem by encoding the computation as a polynomial.
- A single polynomial can contain an unbounded amount of information, and a single polynomial equation (for example, *P*(*x*+2) − *P*(*x*+1) − *P*(*x*) = *Z*(*x*) ∗ *H*(*x*)) can stand in for an unbounded number of equations between numbers.
- If you can verify the equation between the polynomials, you implicitly verify all of the numerical equations (with *x* replaced by each actual coordinate) simultaneously.
- We use a special type of polynomial “hash”, called a polynomial commitment, to allow us to verify polynomial equations in a very short time, even when the underlying polynomials are very large.

**So, how do these magical polynomial hashes work?**

There are currently three major schemes in wide use: **bulletproofs, Kate (KZG) commitments, and FRI**.

- Here is Dankrad Feist’s description of Kate commitments: https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments.html
- Here is the curve25519-dalek team’s description of bulletproofs: https://doc-internal.dalek.rs/bulletproofs/notes/inner_product_proof/index.html, and here is my illustration: https://twitter.com/VitalikButerin/status/1371844878968176647
- Here is a description of FRI, written by me: https://vitalik.ca/general/2017/11/22/starks_part_2.html

**Whoa, whoa, slow down. Just explain one of them simply, without sending me off to even scarier links**

Honestly, they’re not that simple. That’s why all this math didn’t really take off until around 2015.

**Please?**

In my opinion, FRI is the easiest to fully understand (Kate commitments are easier to understand if you’re willing to treat elliptic curve pairings as a black box, but pairings themselves are really complicated, so all in all I find FRI simpler).

Below is a simplified version of how FRI works (the real protocol has many tricks and optimizations that are omitted here for simplicity). Suppose you have a polynomial *P* of degree < *n*. The commitment to *P* is the Merkle root of its evaluations at some pre-selected set of coordinates (e.g. {0, 1 … 8*n* − 1}, although this is not the most efficient choice). Now, we need to add something extra to prove that this set of evaluations actually comes from a polynomial of degree < *n*. (Translator’s note: the reason we must verify that the degree is less than *n* is to keep a cheater’s chance of passing random sampling as low as possible. Given a polynomial *P*, a different polynomial *Q* that agrees with tampered values almost everywhere could also pass random sampling, but only if its degree is allowed to be large enough.)

Let *Q*(*x*) be the polynomial formed from the even-position coefficients of *P* and *R*(*x*) the polynomial formed from the odd-position coefficients, so that *P*(*x*) = *Q*(*x*^2) + *x* ∗ *R*(*x*^2); both have roughly half the degree of *P*. We ask the prover to provide the Merkle roots of *Q*(*x*) and *R*(*x*). Then, we generate a random number *r* and ask the prover to provide the “random linear combination” *S*(*x*) = *Q*(*x*) + *r* ∗ *R*(*x*).

We pseudorandomly sample a large set of coordinates (as before, using the provided Merkle roots as the random seed) and ask the prover to provide the Merkle branches of *P*, *Q*, *R*, and *S* at these coordinates. At each provided coordinate *x*, we check that:

- *P*(*x*) = *Q*(*x*^2) + *x* ∗ *R*(*x*^2)
- *S*(*x*) = *Q*(*x*) + *r* ∗ *R*(*x*)

If we do enough checks, we can be confident that the “expected” values of *S*(*x*) differ from the “provided” values in at most a small fraction of positions (e.g. 1%).

From here, we simply repeat the game with *S*, progressively “reducing” the degree of the polynomial we care about, until the degree is low enough that we can check it directly.
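Here is a sketch of a single folding step, with an arbitrary toy polynomial and challenge (the coordinates are chosen arbitrarily; the real protocol works over a finite field and checks Merkle branches rather than evaluating polynomials directly):

```python
from fractions import Fraction

def evalp(coeffs, x):
    """Evaluate a coefficient list (lowest degree first) at x."""
    return sum(Fraction(c) * Fraction(x) ** i for i, c in enumerate(coeffs))

P = [3, 1, 4, 1, 5, 9, 2, 6]            # arbitrary degree-7 polynomial
Q, R = P[0::2], P[1::2]                 # even / odd coefficient halves
r = 7                                   # verifier's random challenge
S = [q + r * rr for q, rr in zip(Q, R)]  # S(x) = Q(x) + r*R(x), half the degree

for x in (2, 5, 11):                    # spot checks at sampled coordinates
    assert evalp(P, x) == evalp(Q, x * x) + x * evalp(R, x * x)
    assert evalp(S, x) == evalp(Q, x) + r * evalp(R, x)
```

Note how `S` has 4 coefficients where `P` had 8: each round halves the degree, which is why the whole process terminates after logarithmically many rounds.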

The verifier in an interactive description like the one above (“Bob”) is an abstraction, useful for cryptographers reasoning about these protocols. In reality, Alice generates the complete proof herself, and to prevent her from cheating we use Fiat-Shamir: each sampled coordinate or *r* value is picked pseudorandomly by hashing the data generated so far in the proof.

A complete “FRI commitment” to *P* (in this simplified protocol) includes the Merkle root of the evaluations of *P*, the Merkle roots of each successively derived polynomial *S*, *S*′, …, and the Merkle branches at all of the sampled coordinates for each round of checks.

Each step in the process can introduce a small amount of “error”, but if you do enough checks, the total error will be low enough that you can show that *P*(*x*) agrees with *some* polynomial of degree < *n* on at least, say, 80% of coordinates. This is sufficient for our use case: if you want to cheat in a zk-SNARK, you would need to make a polynomial commitment to a set of values that differs from every genuine polynomial of degree < *n* in many positions, so any attempt to FRI-commit to those values would fail.

Also, you can check that the total number and size of the objects in a FRI commitment is logarithmic in the degree, so for large polynomials the commitment really is much smaller than the polynomial itself.

To check equations between polynomial commitments of this type (e.g., given FRI commitments to *A*, *B*, and *C*, check that *A*(*x*) + *B*(*x*) = *C*(*x*)), simply pick many random coordinates, ask the prover to provide the Merkle branches of each polynomial at those coordinates, and verify that the equation holds at each one.

The protocol described above is extremely inefficient; there is a whole host of algebraic tricks that make it about 100x more efficient, and you need these tricks if you want a practical protocol, for example one usable inside blockchain transactions. In particular, *Q* and *R* are not actually necessary: if you choose your evaluation points cleverly, you can reconstruct the needed evaluations of *Q* and *R* directly from evaluations of *P*. But the description above should be enough to convince you that polynomial commitments are fundamentally possible.

**Finite fields**

Modular arithmetic is sometimes called “clock math” because of the way numbers “wrap around”

With modular arithmetic, we have created an entirely new arithmetic system, one that is just as self-consistent as conventional arithmetic. Therefore, we can talk about all of the same kinds of structures over this system, including polynomials, that we talk about in “regular math”. Cryptographers love working with modular math (or, more generally, “finite fields”) because the result of any modular arithmetic operation is bounded in size: no matter what you do, the values never “escape” the set {0, 1, 2 … *p* − 1}. Even evaluating a degree-one-million polynomial in a finite field will never produce a value outside that set.
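A small sketch of that boundedness (the modulus and the coefficients here are arbitrary choices for illustration):

```python
p = 2**31 - 1                 # a Mersenne prime, arbitrary choice of modulus

def eval_mod(coeffs, x, p):
    """Evaluate a polynomial via Horner's rule, reducing mod p at every step."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# even a "million-degree" polynomial produces a small, bounded value
coeffs = [i % 97 for i in range(1_000_000)]
v = eval_mod(coeffs, 123456, p)
assert 0 <= v < p             # the value never escapes {0, ..., p-1}
```

Over the plain integers, the same evaluation would produce a number with millions of digits; over the field it stays 31 bits wide, which is what makes the arithmetic in these protocols practical.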

**A more meaningful example of turning a calculation into a set of polynomial equations?**

**Privacy**

But here’s a question: how do we know that the commitments to *P*(*x*) and *R*(*x*) don’t “leak” information (for example, enough to reveal the exact value of *P*(64) that we are trying to hide)?

Here’s some good news: **these proofs are small proofs about large amounts of data and computation, so in general a proof is simply not big enough to leak more than a little bit of information**. But can we get from “only a little” to “zero”? Fortunately, we can.

A fairly general trick here is to add a little “fuzz” to the polynomials: when we choose *P*, add a small multiple of *Z*(*x*) to it (that is, pick a random *E*(*x*) and set *P*′(*x*) = *P*(*x*) + *Z*(*x*) ∗ *E*(*x*)). This does not affect the correctness of the claim (*P*′ takes the same values as *P* on the coordinates “where the computation happens”, so it is still a valid transcript), but it adds enough extra “noise” to the commitment to make any remaining information unrecoverable. For FRI, it is also important not to sample random points within the range where the computation happens ({0 … 64} in this case). (Translator’s note: so that even evaluations of the same polynomial are unguessable.)
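A sketch of the blinding trick on a toy domain (the domain, polynomials, and randomness here are all illustrative; a real protocol does this over a finite field):

```python
from fractions import Fraction
import random

domain = range(5)                 # toy "the computation happens here" domain

def evalp(coeffs, x):
    return sum(Fraction(c) * Fraction(x) ** i for i, c in enumerate(coeffs))

def Z(x):
    """The lowest-degree polynomial vanishing on the whole domain."""
    out = Fraction(1)
    for i in domain:
        out *= x - i
    return out

P = [2, 0, 1]                     # P(x) = x^2 + 2, the "real" transcript
E = [random.randrange(1, 100) for _ in range(3)]  # fresh random blinding poly

def P_blinded(x):
    # P'(x) = P(x) + Z(x)*E(x): identical on the domain, noisy elsewhere
    return evalp(P, x) + Z(x) * evalp(E, x)

assert all(P_blinded(x) == evalp(P, x) for x in domain)  # same on the domain
assert P_blinded(7) != evalp(P, 7)                       # differs off-domain
```

Because *Z* vanishes on the domain, the blinding is invisible exactly where correctness is checked, while every off-domain evaluation is masked by randomness.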

**Can we recap once more?**

- The three most popular types of polynomial commitments are FRI, Kate, and bulletproofs.
- Kate is conceptually the simplest, but it relies on a very complex “black box” of elliptic curve pairings.
- FRI is cool because it only relies on hashing; it works by gradually reducing polynomials to polynomials of lower and lower order, checking random samples at each step, using Merkle branches to prove equivalence.
- To prevent the size of individual values from blowing up, instead of doing arithmetic and polynomial operations over the integers, we do everything over a finite field (usually integers modulo some prime *p*).
- Polynomial commitments naturally lend themselves to privacy, because the proof is already much smaller than the polynomial, so a commitment can reveal at most a little of the information in the polynomial. But we can add some randomness to the polynomial to reduce the information leaked from “a little” to “zero”.

**Which questions are still being researched?**

- **Optimizing FRI**: there are already many optimizations involving carefully chosen evaluation domains, “DEEP-FRI”, and a bunch of other tricks to make FRI more efficient. Starkware and others are working on this.
- **Better ways to encode computations as polynomials**: figuring out the most efficient way to encode complex computations involving hash functions, memory accesses, and other features as polynomial equations remains a challenge. Much progress has been made (see PLOOKUP, for example), but we still need more, especially if we want to encode **general-purpose virtual machine execution** as polynomials.
- **Incrementally verifiable computation**: it would be nice to be able to efficiently keep “extending” a proof as a computation continues. This is valuable in the “single prover” case, but also in the “multi-prover” case, especially for blockchains where different participants create blocks. For some recent work on this, see Halo.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/vitalik-how-do-polynomial-commitments-make-zk-snark-implementations-more-efficient/
