Newsletter: April - The Investor's Guide to Zero-Knowledge Proofs and Blockchain Scaling Solutions
Updated: Jun 25
Blockchain technology has gained significant attention in recent years, primarily due to the rise of cryptocurrencies like Bitcoin and Ethereum. These digital assets have demonstrated the potential for decentralized financial systems and programmable trust. However, as these networks grow in popularity, they face a fundamental challenge - scalability. In this edition of the newsletter, we will discuss the inherent tradeoffs between scalability and maintaining decentralization, and dive into one of the hottest solutions at the forefront of tackling this issue: zero-knowledge technology and ZK/validity rollups.
The Scalability Trilemma
The key feature of a blockchain is its decentralized nature. A network of nodes maintains and validates the entire transaction history, ensuring transparency, security, and immutability. However, this design has an inherent tradeoff - as the number of participants in the network grows, so does the amount of data to be processed, stored, and transmitted, leading to increased latency and reduced throughput. This is known as the scalability trilemma, which describes the difficulty of achieving decentralization, security, and scalability simultaneously.
There are several factors contributing to this dilemma:
Consensus Mechanisms: Most blockchains use consensus mechanisms like Proof of Work (PoW) or Proof of Stake (PoS) to secure the network and validate transactions. These mechanisms require significant computational resources, energy, and time, limiting the number of transactions that can be processed per second.
Block Size and Block Time: To maintain consistency across the network, transactions are bundled into blocks and added to the blockchain at regular intervals. However, increasing the block size or reducing the block time to accommodate more transactions may lead to issues such as increased propagation times and centralization risks.
Network Latency: As the number of nodes in a decentralized network grows, so does the time required to propagate information across the network. This can result in longer transaction confirmation times and a reduced capacity to handle large transaction volumes.
The scalability trilemma, illustrated. Credits: Vitalik Buterin
A blockchain can achieve two of the above traits but at the expense of the third. Many alternative layer 1 (L1) chains have chosen to sacrifice decentralization for scalability and security. However, it’s important to remember why decentralization is important. It provides the chain anti-fragility, robustness, reliability, and censorship resistance.
In the cryptocurrency world, something is trustless if users do not have to rely on third parties or intermediaries (like banks) to control their funds. The goal is to increase the number of transactions possible while retaining sufficient decentralization.
Generally speaking, other blockchain efforts (outside of Ethereum) to increase TPS have focused on one or more of the following:
Speeding up consensus (allowing nodes to agree on the order of transactions faster),
Increasing block sizes (more data per block), and
Decreasing block times (more blocks per minute).
Although often discussed as such, blockchain scalability does not just pertain to TPS. Many L1s, like BNB Chain, currently boast high TPS numbers but suffer from “chain bloat” and ever-increasing hardware requirements just to keep the chain running. L1s must be able to process more transactions without creating more problems down the road. A node in a technically sustainable blockchain has to do three things:
Keep up with the tip of the chain (most recent block) while syncing with other nodes.
Be able to sync from genesis in a reasonable time (days as opposed to weeks).
Avoid state bloat.
Implementing one or more of the parameter tweaks mentioned above has generally been the approach for most next-generation “Ethereum Killers”: BNB Chain, Avalanche, Solana, Fantom, Algorand, etc. And while these tweaks have improved TPS by nearly 100x, these chains can still (mostly) only achieve TPS into the single-digit thousands (<10k). However, that simply will not suffice should these projects reach global adoption. For these platforms to accommodate growth, they all will have to resort to increasing hardware requirements within their system.
The computational “TPS ceiling” within modern-day monolithic chains is being realized. Monolithic refers to a blockchain in which every node performs all parts of the blockchain: execution, consensus, and data availability.
Execution refers to the computation of transactions. The execution layer is the user-facing layer where transactions get executed.
Consensus refers to ordering transactions and nodes coming to an agreement on the state.
Data availability guarantees blocks are fully published to the network. The consensus layer plus data availability guarantees all blockchain data is published and accessible to anyone.
So, if monolithic chains aren’t the solution, what is? The current industry approach is scaling in layers. Enter layer 2s and rollups.
Layer 2 Solutions: Rollups
To address the scalability challenge, blockchain developers have proposed various layer 2 solutions that operate on top of the existing blockchain infrastructure. One promising approach is the use of rollups, which aggregate multiple transactions off-chain and then commit a single proof on-chain. This method reduces the on-chain data storage and computation requirements, improving scalability without sacrificing decentralization or security.
There are two primary types of rollups: optimistic rollups and zk-rollups.
Optimistic Rollups (ORs): Optimistic rollups rely on the assumption that all off-chain transactions are valid unless proven otherwise. When a batch of transactions is submitted to the blockchain, it is temporarily accepted, and anyone can challenge the validity of the transactions within a certain timeframe. If no challenges are raised, the transactions are considered valid. This approach trades some security for increased scalability, as fewer cryptographic proofs are required. Arbitrum and Optimism are ORs and the two leading rollup solutions today. Other OR protocols include Boba, Metis, Mantle, Fuel, and Base.
zk-Rollups (ZKRs): These utilize zero-knowledge proofs, a cryptographic technique that allows one party to prove the validity of a statement without revealing any additional information. zk-rollups bundle multiple transactions together, generate a proof, and submit it to the blockchain. This approach ensures that the off-chain computation remains secure and private while significantly reducing on-chain data requirements.
ZKRs are incredibly promising but extremely difficult to build compared to their OR counterparts. For this reason, they are earlier in their development and adoption curve. Most ZKR projects are being built atop Ethereum. Popular Ethereum ZKR projects include zkSync, Polygon, Starkware, Aztec, Scroll, Linea, and others. However, although we will focus on Ethereum ZKRs in this piece, it should be noted that other L1 protocols like Near and Tezos are rolling out their own implementations.
In Ethereum land, numerous projects have chosen to employ airdrops as a means to kickstart engagement and attract further investment within their ecosystem, with Arbitrum’s airdrop in March 2023 being the latest high-profile example. After Arbitrum's debut, ZK initiatives appear to be the latest investment trend, with users bridging assets over to ZK projects. zkSync’s new zkEVM (Era) gained $100 million+ in TVL in just a few weeks.
In the realm of digital assets, any headline-worthy event can be transformed into a potential storyline. With the March mainnet launches of Polygon's zkEVM and zkSync's Era, airdrop enthusiasts have shifted their focus towards these networks. Other zero-knowledge endeavors, such as Scroll and Starknet, which are currently in their testnet stages, also present opportunities for users and are expected to attract capital from those seeking potential airdrops.
Prior to 2022, ZKRs were largely narrow in scope and only useful for specific actions, like trading, rather than the general-purpose utility Ethereum is known for. This allowed early projects like dYdX, zk.money, and StarkEx to see some success. However, for them to truly grow to a meaningful size and also help scale all of Ethereum, projects understood that they would need to become more flexible and compatible with Ethereum overall. This ushered in the idea of a zkEVM, and from 2022 onwards, the space has seen an incredible race to launch this exciting technology.
Zero-knowledge Tech and zkEVMs
The conventional ZKR construction is built atop Ethereum mainnet and consists of two on-chain smart contracts: the "main" contract and a verifier contract. The main contract stores rollup blocks, monitors the state of the blockchain, and keeps track of deposits and withdrawals. The verifier contract verifies the ZKPs transmitted to Ethereum.
With the two contracts in place and a proving system, you have the three critical components for a ZKR.
The Execution Environment (the off-chain virtual machine): While the ZK-rollup protocol operates on Ethereum, the actual execution of transactions and state storage occurs within a separate VM, independent of the Ethereum Virtual Machine (EVM). This off-chain VM is where the rollup's smart contracts execute, and it constitutes the second layer, or "layer 2," of the ZK-rollup protocol; its output is a new state derived from the prior state and the current transactions. Ethereum Mainnet validates the correctness of state transitions within the off-chain VM via verified validity proofs. While some projects, like Scroll, Polygon, and zkSync 2.0, are working on a general-purpose zkEVM, current rollup implementations (StarkNet, Aztec, zkSync v1) have their own distinct execution environments.
The Proving Circuit - With the proving circuit, transactions in the execution environment are validated with zero-knowledge proofs such as SNARKs, STARKs, PLONK, etc. The circuit uses the pre-state data, the transactions and their execution, and the post-state data to complete the proof generation process. In this manner, the prover provides a succinct proof of the state transition's validity.
The Verifier Contract - The verifier contract performs computations on the provided proof to verify that the outputs were correctly derived from the inputs.
State commitments within the ZK-rollup, which include layer 2 accounts and balances, are represented as a Merkle tree. The cryptographic hash of the Merkle tree's root (Merkle root) is stored in the on-chain contract, enabling the rollup protocol to track state changes within the ZK-rollup.
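To make the state-commitment idea concrete, here is a minimal Python sketch (illustrative only; production rollups use more elaborate tree layouts and hash functions) of how a Merkle root commits to a set of account leaves, and how any state change produces a new root:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute the Merkle root over a list of serialized leaves."""
    if not leaves:
        return h(b"")
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Toy L2 state: (account, balance) pairs serialized as leaves
state = [b"alice:100", b"bob:250", b"carol:7"]
root_before = merkle_root(state)

# A transfer changes two leaves, which changes the root posted on-chain
state[0], state[1] = b"alice:90", b"bob:260"
root_after = merkle_root(state)
assert root_before != root_after
```

Only the 32-byte root needs to live on-chain; any individual balance can later be proven against it with a short Merkle path.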
ZKRs can batch together thousands of off-chain transactions, execute the computation in their own virtual machine, and then post the batch to mainnet along with a “validity proof.” Validity proofs are a critical aspect of ZK-rollups, as they provide a cryptographically secure method of verifying the correctness of batched transactions. They allow one to prove the correctness of a statement without disclosing the underlying data, hence the term "zero-knowledge proofs." ZK-rollups employ validity proofs to confirm the accuracy of off-chain state transitions without re-executing transactions on Ethereum. The validity proof is typically a “SNARK” or “STARK” (discussed in detail below) attesting to the already-computed L2 state, and it is sent to mainnet for storage.
Provers and Validators
There are two important actors in a ZKR: provers and validators/verifiers. “Provers” (or sequencers) are a small set of nodes that run specialized hardware, compute all the transactions, and compile them into a much smaller ZK proof. Typically, they are not very transparent and/or auditable, but users can sleep easily because, thanks to the cryptography involved, it is computationally infeasible to forge an invalid ZK proof. Validators are a much larger, easier-to-run set of nodes that verify the validity of the ZK proof submitted by the provers. This group serves to hold the provers accountable and ensure censorship resistance.
Instead of a lengthy challenge model with fraud proofs, as is the case for Optimistic rollups (ORs), ZKRs involve a quicker validation period through their validity proof security model, which generates the proof up front as soon as blocks are submitted. From there, the proof can be quickly verified on the L1, allowing for fast user withdrawals. Provers work as aggregators for ZKRs.
The operator (usually called the sequencer) then aggregates many transactions using compression techniques into batches and submits the batches to Layer 1. The proof is far less data-heavy than if the L1 had to redo all the computation itself. The “batch” that’s rolled up is periodically posted to mainnet Ethereum and contains the net outcomes of many different transactions as they occurred on the rollup layer. This data is verified and updated by the rollup operator every time the L2 advances its state. Therefore, L2 execution and L1 data update in lockstep.
The data posted to L1 includes:
A Merkle tree of all transactions in a batch/block, including users’ balances, accounts, etc.
A hash of the Merkle tree root for transactions that attest to the inclusion in the block
The batch root, which contains the Merkle tree root of all transactions in a batch
A set of intermediate state roots representing the state after the transaction
ZK Proofs (ZKPs)
In general, a ZK/validity proof is a cryptographic method of transaction verification in which a prover generates a proof for specific information, and a verifier validates that proof. A ZKP allows someone to publicly verify that they possess specific information without revealing the details of that information. As a result, zero-knowledge cryptographic proofs provide phenomenal privacy features and also reduce the computing and storage resources needed to validate a block, since verifiers need only the small proof rather than the full transaction data.
The "witness" is the secret knowledge (the data) that the prover holds. The prover must prove that they accurately know the witness, and the verifier must be able to assess whether the prover truly has knowledge of it.
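A deliberately naive Python sketch can pin down this vocabulary. Here the witness is a hash preimage and the public statement is its hash; note that this toy prover simply reveals the witness, which is precisely what a real zero-knowledge proof avoids doing:

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Public statement: "I know a preimage of this hash."
# The preimage itself is the witness.
witness = b"secret account key"
statement = h(witness)          # published by the prover

# A naive prover would just reveal the witness...
def naive_prove(w: bytes) -> bytes:
    return w

# ...and the verifier checks it against the public statement.
def verify(statement: str, revealed: bytes) -> bool:
    return h(revealed) == statement

assert verify(statement, naive_prove(witness))
```

A genuine ZKP construction replaces `naive_prove` with a protocol that convinces the verifier the prover knows `witness` while revealing nothing about it.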
Validity proofs are complex and rely on polynomial commitments. In polynomial commitments, information from each stage of a verification calculation is encoded as polynomials. By checking the polynomial equations, you indirectly ascertain the numerical calculations, but committing to and opening these polynomials is challenging. The three most common polynomial commitment schemes are:
KZG (Kate) polynomial commitments (used by most ZKRs)
FRI (used by STARKs)
Inner product arguments (IPA) (used by Bulletproofs and Halo)
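The core trick behind "checking polynomial equations" can be sketched in a few lines of Python. This is not a real commitment scheme like KZG; it only illustrates the Schwartz-Zippel idea that a polynomial identity checked at one random field point almost certainly holds everywhere:

```python
import random

P = 2**61 - 1  # a large prime field modulus (illustrative)

def poly_eval(coeffs, x, p=P):
    """Evaluate a polynomial (lowest-order coefficient first) at x mod p."""
    acc = 0
    for c in reversed(coeffs):      # Horner's method
        acc = (acc * x + c) % p
    return acc

# Claim: A(x) * B(x) == C(x) as polynomials.
A = [1, 2]          # 1 + 2x
B = [3, 0, 1]       # 3 + x^2
C = [3, 6, 1, 2]    # their true product: 3 + 6x + x^2 + 2x^3

# If the identity holds at a random point, it almost certainly holds
# everywhere (failure probability <= degree / field size).
r = random.randrange(P)
assert poly_eval(A, r) * poly_eval(B, r) % P == poly_eval(C, r)

# A tampered product polynomial is caught with overwhelming probability.
C_bad = [3, 6, 1, 3]
r = random.randrange(P)
assert poly_eval(A, r) * poly_eval(B, r) % P != poly_eval(C_bad, r)
```

Real schemes like KZG add the missing ingredient: a binding cryptographic commitment so the prover cannot change the polynomial after the random point is chosen.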
ZK proofs can be used for various purposes: a) anonymous payment service, b) allowing access to services without revealing personal data, c) proving statements on personal data, and d) enabling trustless computing services.
ZK-rollups are (theoretically) faster and more efficient than Optimistic rollups, but they suffer from friction and compatibility issues when migrating smart contracts to Layer 2. This is because Ethereum was not originally designed to support ZKPs. The EVM and opcodes are not zero-knowledge proof-friendly, making their development as an L2 scaling solution an arduous task for developers.
The method for creating a ZKP is incredibly complex, requiring the transformation of program logic into a mathematical circuit that must also capture hashing and smart contract operations, as well as logical operations such as "and," "or," and "not." However, a mathematical circuit consists only of simple operations such as addition and multiplication, making it very hard to emulate sophisticated algorithms with so few primitives.
While validity proofs are complex and expensive (relative to Optimistic fraud proofs), verification by the L1 is simple, making them—even still—cheaper than a regular L1 transaction. However, due to the complex computation involved in the validity proofs, special-purpose hardware may be needed to run a node, creating a centralizing effect and a less open network.
zkSNARK stands for “Zero-Knowledge Succinct Non-Interactive Argument of Knowledge.” Alessandro Chiesa, a professor from UC Berkeley, co-authored a paper where the term “zkSNARK” was first used. Breaking down the acronym further:
Zk: “zero knowledge,” used for protecting users’ privacy
S: “succinct” proofs, referring to data compression; the proof can be verified in only a few milliseconds. This means that rather than the Ethereum mainnet validating nodes needing to verify every transaction individually, validators just verify a small proof to ensure the validity of the transactions. Typically, proofs have a set number of group elements (think of transactions), although the actual proof size is significantly smaller.
N: “non-interactive” signifies that the prover just needs to send a single message to the verifier instead of exchanging messages back and forth. Non-interactivity means the prover can generate a single proof that can be verified by anyone, anywhere, without ever requiring further interaction with the prover.
ARK: “argument of knowledge,” meaning the proof convinces the verifier that the prover actually possesses the underlying knowledge (the witness), not merely that the statement happens to be true.
zkSNARK is used to construct a proof that allows one party (prover) to prove that the statement is true to the other party (verifier) without revealing any information.
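These properties can be seen in miniature with a classic Schnorr proof of knowledge of a discrete logarithm, made non-interactive via the Fiat-Shamir transform (hashing the transcript to derive the challenge). This is not a SNARK, and the parameters below are toy values, but it exhibits both the zero-knowledge property (the secret exponent is never revealed) and non-interactivity (one message, verifiable by anyone):

```python
import hashlib
import secrets

# Toy parameters (illustrative only; real systems use vetted groups/curves).
p = 2**127 - 1   # a Mersenne prime modulus
g = 3            # base element (assumed suitable for illustration)
q = p - 1        # exponents reduce mod p-1 by Fermat's little theorem

def fiat_shamir_challenge(*parts: int) -> int:
    """Derive the verifier's challenge by hashing the transcript."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)            # one-time nonce
    t = pow(g, k, p)                    # commitment
    c = fiat_shamir_challenge(g, y, t)  # hash replaces the verifier's message
    s = (k + c * x) % q                 # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = fiat_shamir_challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret_x = secrets.randbelow(q)
y, t, s = prove(secret_x)
assert verify(y, t, s)                  # anyone can check, no interaction
assert not verify(y, t, (s + 1) % q)    # forged responses fail
```

SNARKs generalize this pattern from one algebraic statement to arbitrary circuits, while keeping the proof small and fast to verify.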
*A quick note about nomenclature: In the domain of zero-knowledge technology and the “ZK-rollup” field, the accepted term for the scaling solution is a “ZK-rollup.” However, nearly every ZKR on the market, with the exception of Aztec, does not actually utilize the privacy-preserving zk aspect of the SNARK or STARK. Instead, these projects utilize the proof for the primary attribute of succinctness (S). This is the portion of the SNARK/STARK that enables scalability increases. Hence, some in the ecosystem would prefer to refer to them as "validity rollups" rather than “ZK-rollups.”
Nevertheless, terms such as ZKR, zkEVM, and ZK have become widely accepted by both specialists and newcomers in the web3 sphere. Until more accurate terminology is universally agreed upon, “ZK” will continue to encompass the entire ZK/validity rollup field.
zkSTARK stands for “Zero-Knowledge Scalable Transparent Argument of Knowledge.” Eli Ben-Sasson, Michael Riabzev, Iddo Bentov, and Yinon Horesh published a paper in 2018 titled “Scalable, transparent, and post-quantum secure computational integrity,” where the term “STARK” was coined. STARKs are widely used by StarkEx and StarkNet, scaling solutions built by the Starkware team. zkSTARKs use cryptographic proofs and algebra to enforce the integrity (and optionally the privacy) of computations on blockchains. They allow blockchains to move computation to an off-chain prover, after which an on-chain verifier can verify the validity of those computations.
zkSTARK offers various improvements over zkSNARK and addresses two of its major drawbacks. First, no trusted setup is required, so there is no risk of malicious actors gaining access to the setup's secret parameters. Second, zkSTARKs rely on hash functions for security, making them quantum-resistant, and proving is faster and cheaper at scale than with zkSNARKs. The major drawbacks of zkSTARKs are their much larger proof sizes, which make on-chain verification more expensive, and their relative immaturity: documentation, tools, and libraries are not yet developer-friendly.
The goal of creating a zkEVM is to onboard developers and users as quickly and as easily as possible while maintaining compatibility with existing Ethereum tooling, standards, contracts and dApps. zkEVMs are a significant advancement in the field of blockchain because they allow for ZKRs to support a much broader range of applications than they currently can. The initial implementation of ZK rollups enabled basic operations on highly scalable and cost-efficient Layer 2s, such as sending ETH and transferring tokens. However, with the introduction of zkEVMs, developers can now write arbitrarily complex smart contract code and deploy it to ZK-powered Layer 2s.
zkEVMs also offer fast finality and capital efficiency, providing instant finality in transactions after being written on Ethereum, making it an effective solution for NFT traders and DeFi investors who constantly move assets between Ethereum Layers 1 and 2.
However, creating a zkEVM requires converting EVM programs into a specific format called an "algebraic circuit" so that the computations can be proven with zero-knowledge (ZK) proofs.
There are three ways to achieve this:
proving the EVM execution trace directly
creating a custom virtual machine (VM) and mapping EVM opcodes into the custom VM's opcodes
creating a custom VM and transpiling Solidity into the custom VM's bytecode.
Each approach has trade-offs between high compatibility (easy to redeploy from Layer 1) and high performance (quick to generate ZK proofs). Generally speaking, the higher the compatibility with Layer 1, the lower the performance, and vice versa. For example, with the first option, mirroring the EVM directly (high compatibility) introduces massive overhead, resulting in very slow ZKP generation (less performant).
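The second approach, opcode mapping, can be sketched as a simple table-driven transpiler. All custom-VM opcode names below are invented for illustration; the point is only that one EVM opcode may expand into several ZK-friendly ops, and longer expansions mean more circuit constraints and slower proving:

```python
# Hypothetical mapping from a few EVM opcodes to a ZK-friendly custom VM.
# All names on the right-hand side are invented for illustration.
EVM_TO_CUSTOM = {
    "ADD":  ["ZK_ADD"],
    "MUL":  ["ZK_MUL"],
    # An opcode with no one-to-one equivalent expands into several custom ops.
    "SHA3": ["ZK_LOAD_BUF", "ZK_HASH_PERMUTE", "ZK_HASH_FINALIZE"],
}

def transpile(evm_program: list[str]) -> list[str]:
    """Map each EVM opcode to its custom-VM expansion, in order."""
    out: list[str] = []
    for op in evm_program:
        if op not in EVM_TO_CUSTOM:
            raise NotImplementedError(f"unsupported EVM opcode: {op}")
        out.extend(EVM_TO_CUSTOM[op])
    return out

assert transpile(["ADD", "SHA3", "MUL"]) == [
    "ZK_ADD", "ZK_LOAD_BUF", "ZK_HASH_PERMUTE", "ZK_HASH_FINALIZE", "ZK_MUL"
]
```

Hash opcodes like SHA3 are exactly where real zkEVMs pay the heaviest constraint cost, which is why some projects substitute ZK-friendly hash functions instead.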
How do zkEVMs work?
zkEVMs are composed of three distinct parts: a running (execution) environment, a proving circuit, and a verifier contract. Each of these components plays a crucial role in the operation of the zkEVM, from execution and proof generation to verification.
The Execution Environment is where smart contracts run in the zkEVM. It functions similarly to the EVM, and its output is a new state derived from the initial state and the current transactions.
The Proving Circuit is where transactions in the execution environment are verified using zero-knowledge proofs. The circuit uses pre-state, transaction, and post-state inputs to complete the proof generation process. Through this process, the prover asserts a concise validity proof of the state transition.
ZKPs are slower to generate than ORs' fraud-proof approach because the conversion process from a classical program to a ZK-friendly format is cumbersome. It requires developers to write or translate code in a high-level ZK language, such as Cairo in the case of Starknet, which is then compiled into a provable form. This process is both time-consuming and complex, as most computations are not natively ZK-compatible. Once in a ZK-friendly format, the computation can be proven with a proof system such as zk-SNARKs or zk-STARKs, the two most widely used families of ZK proof systems. This slow proof generation is a challenge that must be addressed in order to make ZKPs more efficient and accessible for widespread use.
The Verifier Contract, deployed on Ethereum, is responsible for checking validity proofs. The pre-states, transactions, and post-states are committed to the verifier contract, which then checks the provided proof through computation, ensuring that the submitted outputs were correctly generated from the inputs.
Issues with ZKPs and the EVM
Ethereum, being a blockchain platform designed prior to the integration of zero-knowledge proofs, presents a number of obstacles in terms of their implementation within the ecosystem. These limitations extend to the implementation of zero-knowledge Ethereum Virtual Machines (zkEVMs), particularly in terms of the time required for prover computation. In the case of Type-1 zkEVMs, the absence of optimization for proof generation results in prohibitively high costs.
The creation of a zkEVM represents a significant technical challenge, given the design of the Ethereum Virtual Machine (EVM) and the specific requirements of zero-knowledge proof systems. One of the key difficulties lies in the stack-based architecture of the EVM, which complicates the proof generation process. The stack is a data structure that uses the last-in, first-out principle and has a word size of 256 bits, allowing for native hashing and elliptic curve operations, critical components in ensuring the security of funds. The EVM executes a program by pushing data onto the stack, performing operations, and then popping the data off when it is no longer needed.
The use of special opcodes by the EVM, particularly for error handling and program execution, adds to the complexity of the proof generation process. The utilization of these opcodes, while necessary for handling more complex tasks and errors, can increase the difficulty of verifying the correct functioning of the EVM.
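A toy interpreter makes the stack model concrete. This sketch supports only a few invented instruction tuples and ignores gas, memory, and storage, but it shows the last-in, first-out discipline and the 256-bit wraparound that a zkEVM circuit must faithfully constrain at every step:

```python
WORD = 2**256   # EVM words are 256-bit; arithmetic wraps modulo 2**256

def run(program: list[tuple]) -> list[int]:
    """Interpret a tiny subset of stack opcodes and return the final stack."""
    stack: list[int] = []
    for instr in program:
        op = instr[0]
        if op == "PUSH":
            stack.append(instr[1] % WORD)
        elif op == "ADD":
            a, b = stack.pop(), stack.pop()     # last-in, first-out
            stack.append((a + b) % WORD)
        elif op == "MUL":
            a, b = stack.pop(), stack.pop()
            stack.append((a * b) % WORD)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# (2 + 3) * 10, computed on the stack
final = run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 10), ("MUL",)])
assert final == [50]

# Overflow wraps, as in the EVM
assert run([("PUSH", 2**256 - 1), ("PUSH", 2), ("ADD",)]) == [1]
```

In a zkEVM, each of these pushes, pops, and modular operations becomes constraints in the proving circuit, which is why the stack architecture adds so much overhead.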
The Spectrum of “EVM Compatibility”
The math and proof system behind ZKRs is not easily compatible with Ethereum's EVM code/construction. ZKRs require arithmetic circuits to demonstrate the correctness of a ZK computation, and circuits are complex. ZKR developers are required to write low-level code to create them. Moreover, proof creation time is not scalable and can be costly.
More specifically, zkEVMs that prioritize Ethereum compatibility will be less efficient due to computational inefficiencies and longer wait times for L1 finality, but they will provide out-of-the-box functionality that requires little to no contract alterations.
In contrast, zkEVMs that prioritize performance will require more backend changes and may struggle with user adoption. While token grants and ecosystem funds may incentivize development and user adoption, they are not always effective in driving sustained adoption. Nonetheless, the slow but steady growth of Arbitrum and Optimism during the last year provides hope that these challenges can be overcome.
EVM compatibility is a metric used to measure the degree of similarity between the behavior of a given system and the Ethereum Virtual Machine (EVM). One framework for understanding this concept includes several levels of compatibility:
Language-level compatibility: Programs written in one language, such as Solidity, can be compiled or transpiled into bytecode for the target rollup and executed correctly.
Opcode-level compatibility: The rollup implements all EVM opcodes, but there may be some implementation differences.
Full EVM-equivalence: Bytecode deployed on Ethereum Layer 1 can be copied and pasted to a given Layer 2, and it runs as if it were operating on Layer 1, with no differences in behavior between the two.
zkEVM difficulty and development scale.
So-called "EVM-equivalent" zkEVMs, such as Scroll, are able to verify programs that run in an environment that is exactly like the normal Ethereum. These zkEVMs are compatible with the EVM at the byte-code level, which is important because it makes the developer experience virtually indistinguishable from developing on Ethereum itself. It also allows for the re-use of familiar and battle-tested Ethereum clients like geth, and it means that the zkEVM can draft on upgrades to Ethereum itself with minimal extra work needed from the project.
On the other hand, "EVM-compatible" zkEVMs are not as rigorous when it comes to harmonizing with the EVM. These zkEVMs take smart contract code written in, say, Solidity and compile it into a format that has been optimized for ZK proofs. This approach allows for code to run more efficiently, but it throws out a lot of Ethereum's existing infrastructure. The geth client, for example, is known to have certain limitations, which is why teams like zkSync have replaced it with other software written in Rust.
An EVM compatible Virtual Machine (VM) is a quick and easy way to onboard most decentralized applications (dApps) running on Ethereum to Layer 2s, but it is inherently un-generalizable and unsustainable in the long term. Achieving EVM equivalence can be argued to be the best way forward in optimizing engineering resources and staying up-to-date with developments in the EVM.
At Event Horizon Capital (EHC), we believe select cryptoassets will outperform all other asset classes over the next five, ten, and possibly even twenty years due to their superior qualities as new money/assets for the Internet of Value age. Because of this, we seek the best risk-adjusted exposure to protocols that personify the blockchain benefits outlined above. With crypto markets being one of the world’s most dynamic markets, our agile and active management provides the flexibility required for swift, decisive action while also never compromising on security.
EHC’s multi-strategy approach is built upon:
Qualitative fundamental research,
Quantitative tools and valuation metrics
Narrative and sentiment-driven market swings
This newsletter from Event Horizon Capital is intended for informational and illustrative purposes only and has been prepared to provide insights on the market. It should not be construed as an offer, solicitation, or recommendation to buy or sell any security or financial instrument, nor participate in any investment strategy. The opinions and information expressed in this newsletter are as of the date it was written and are subject to change without notice due to various factors, including changing market conditions and regulations. This newsletter is not intended as investment advice and should not be considered as such. Third-party data presented in this newsletter is sourced and deemed reliable, but no guarantee is made as to its accuracy or completeness. All investments carry risk, and there is no assurance that any specific investment, strategy, or product referenced directly or indirectly in this publication will be profitable or suitable for your portfolio.