Public VM Architecture
We need to ensure that the execution environment for public functions has the following properties:
- The Prover is fairly compensated for their work (gas metering)
- A dishonest Prover cannot convince the kernel circuit that a public function failed, when an honest Prover would claim it succeeded
- A public function cannot produce unsatisfiable constraints that would prevent the creation of a valid block proof
- The published verification key for a public function is accurately represented by the opcode stream Provers will use to construct a function proof
Why we need a VM
Point 2 requires a Virtual Machine architecture. The older model of having ACIR directly produce a circuit + verification key is flawed. For any witness that is not a public input, a dishonest Prover could assign an incorrect value and create a failing proof.
The kernel circuit cannot prevent this by requiring the public function proof to succeed, as it is trivial for a malicious contract writer to create unsatisfiable programs (e.g.
assert(true == false)).
We cannot require that a Prover/Sequencer simulate the transaction to determine whether it can be included in the block proof, as this amounts to unpaid compute that can be used as a DoS attack vector.
How a VM solves DoS attacks + dishonest Prover/contract writer attacks
Every VM opcode operates on some input state and executes 1 of 2 functions:
- A pure function that produces output state from input state
- A function that produces no output state and applies assertion checks on input state
The VM simulator asserts that functions of type 1 always succeed.
The VM simulator allows functions of type 2 to fail while still producing a satisfiable VM proof (i.e. the VM circuit contains a failed public input flag that will be set to true if any assertion checks fail).
The protocol mandates the VM proof must be valid, ensuring that functions of type 1 are correctly evaluated by the Prover.
The VM architecture must finally ensure that all witness values are either public inputs or derived from public inputs. This implies that failing assertion checks are due to bad inputs and not a malicious Prover.
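The two opcode classes and the error-flag semantics above can be sketched as follows. This is a toy model, not the Aztec implementation: the opcode names, register-style state, and `failed` field are all invented for illustration.

```python
# Toy sketch of the two VM opcode classes: type-1 opcodes are pure
# functions that must always succeed; type-2 opcodes are assertion
# checks that may fail, flipping an error flag instead of making the
# execution trace unsatisfiable.
from dataclasses import dataclass, field

@dataclass
class VmState:
    registers: dict = field(default_factory=dict)
    failed: bool = False  # stands in for the 'failed' public input flag

def op_add(state: VmState, dst: str, a: str, b: str) -> None:
    """Type 1: produces output state from input state; always evaluated correctly."""
    state.registers[dst] = state.registers[a] + state.registers[b]

def op_assert_eq(state: VmState, a: str, b: str) -> None:
    """Type 2: no output state; on failure the proof stays satisfiable,
    only the error flag is set."""
    if state.registers[a] != state.registers[b]:
        state.failed = True

state = VmState(registers={"x": 2, "y": 3, "z": 6})
op_add(state, "sum", "x", "y")    # type 1: the Prover must compute 5
op_assert_eq(state, "sum", "z")   # type 2: 5 != 6, so the flag flips
print(state.failed)               # True: the VM proof is still valid
```

The key property is that the `assert(true == false)` style of attack from earlier becomes harmless: it sets the flag rather than blocking the block proof.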
Gas metering the VM
Public functions have the following L2 costs associated with them:
- The compute cost of generating a VM proof for a given program
- The compute cost of generating a public kernel snark proof
- The latency cost of reading L2 state
- The storage cost of writing L2 state
The following L1 costs also apply:
- Data cost of broadcasting L2 state writes to L1
Under a VM model, the VM simulator can perform opcode-by-opcode gas metering.
(need to define VM arch for more detail)
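Opcode-by-opcode metering in the simulator might look like the following sketch. The gas costs and opcode names are placeholders, not a real Aztec gas schedule.

```python
# Hypothetical per-opcode gas metering loop. Running out of gas is
# handled like a failed assertion: execution halts and a flag is
# returned, rather than the circuit becoming unsatisfiable.
GAS_TABLE = {"ADD": 1, "MUL": 2, "SLOAD": 100, "SSTORE": 200}  # assumed costs

def meter(program, gas_limit):
    """Charge gas opcode-by-opcode; stop cleanly when the limit is hit."""
    gas_used = 0
    for opcode in program:
        cost = GAS_TABLE[opcode]
        if gas_used + cost > gas_limit:
            return gas_used, False  # out of gas: error flag set
        gas_used += cost
    return gas_used, True

used, ok = meter(["ADD", "MUL", "SLOAD"], gas_limit=150)
print(used, ok)  # 103 True
```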
Like private functions, public functions do not make state updates; they make state update requests that are kicked out as public inputs.
This ensures that all VM witnesses are derived from public inputs (Merkle hash paths have lots of aux data not contained in public inputs)
It also makes gas metering easier as all L1 metering happens in the Kernel circuit, not the VM simulator circuit
The public kernel circuit does not directly verify a public function proof.
It verifies a verifier circuit that verifies a public function proof!
Why? Modularity, ease of development, backwards compatibility support.
Proceed with following development phases:
Phase 0: Full Proverless
There are no proofs. The rollup verifier smart contract returns true unconditionally.
Phase 1: Kernel Proverless
Rollup proofs are generated, but public kernel snark verification logic is absent: the kernel verifier returns true.
Phase 2: VM Proverless
The kernel proof is generated, but the public function verifier circuit always returns true.
Phase 3: Honest prover assumption
The verifier circuit verifies a circuit whose verification key is directly computed from ACIR (like the private functions)
In theory we could go into production with this model, but without full decentralization.
Phase 4: Full VM
The verifier circuit verifies that a VM program has been successfully executed.
Additional benefit of having a VM verifier circuit is that we can upgrade the VM while supporting backwards compatibility (Kernel circuit picks one of multiple VM verification keys depending on the version used when contract was deployed).
Also means we can adopt completely different VM architectures without changing existing circuits (e.g. eWasm?)
Draft Aztec VM Architecture
This VM arch design makes the following assumptions:
- We want the public function developer experience to track private function developer experience as closely as possible
- Public function efficiency is not a protocol bottleneck (i.e. if pub fns are similar to private fns, and private fns proofs can be computed on consumer hardware, we can afford a significant prover slowdown factor when making public fn proofs).
On the tradeoff space between public function prover efficiency and reduced protocol complexity, I think it's better to bias towards reduced protocol complexity.
Given that private functions compile to ACIR opcodes, it is natural to define a VM architecture that directly simulates ACIR opcodes.
Consider a crypto backend that produces circuit constraints from ACIR opcodes. Minimal complexity solution requires a VM architecture that will execute identical constraints within the context of a VM simulator.
i.e. we can repurpose our existing crypto backend and ACIR constraint generation modules to build the VM.
VM architecture overview
ACIR opcodes do not directly map to constraints; they map to subroutines composed of constraints (however the subroutine size can be 1 for a simple opcode e.g. ADD)
The subroutine constraints for a given opcode are described by microcode.
A program instruction is a tuple of:
- instruction value
- instruction witness context
- instruction selector context
The instruction value represents a combination of opcode value and microcode value.
The witness context describes up to four witness indices that are used to look up the witness values used in the microcode constraint.
The selector context describes custom arithmetic selector values, when the values are not derived directly from the opcode/microcode (e.g. mul gates, add gates, linear combination gates)
An instruction value is used to look up the UltraPlonk constraint selectors (from a fixed lookup table that is shared by all VM programs) required to evaluate a microcode constraint.
The microcode constraint selector values are looked up from opcode/microcode instruction table. For arithmetic selectors, the program-specific selector context values are added in.
We assume that if opcode selector context is nonzero, relevant selector values in the microcode lookup table are zero. Vice-versa applies. This can be validated when publishing a contract.
Selector contexts cannot fully define an arithmetic constraint. There must be 1 degree of freedom remaining to prevent VM programs from producing unsatisfiable constraints (e.g. a fully pinned constraint equivalent to 1 = 0). TODO: how do we enforce this? We might need to transpile some ACIR opcodes that run into this issue into sequences of simpler ones that don't.
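As a toy sketch of the lookup described above: an instruction value indexes a fixed table of selector values, the program-specific selector context is added in, and the resulting UltraPlonk-style arithmetic constraint is evaluated over up to four witnesses. The field modulus, table contents, and opcode encodings below are invented.

```python
# Illustrative evaluation of one microcode constraint.
P = 2**61 - 1  # stand-in prime field modulus

# VM-wide table: instruction value -> fixed selectors (q_m, q_1, q_2, q_3, q_c)
INSTRUCTION_TABLE = {
    0x01: (0, 1, 1, -1, 0),  # 'ADD' microcode: w1 + w2 - w3 = 0
    0x02: (1, 0, 0, -1, 0),  # 'MUL' microcode: w1*w2 - w3 = 0
}

def eval_constraint(instr, selector_ctx, witnesses):
    """Look up fixed selectors, add the program-specific selector context,
    and evaluate q_m*w1*w2 + q_1*w1 + q_2*w2 + q_3*w3 + q_c mod P."""
    fixed = INSTRUCTION_TABLE[instr]
    # By the assumption in the text: wherever the context is nonzero the
    # fixed selector is zero (and vice versa), so addition combines them.
    q_m, q_1, q_2, q_3, q_c = (f + c for f, c in zip(fixed, selector_ctx))
    w1, w2, w3, _ = witnesses
    return (q_m * w1 * w2 + q_1 * w1 + q_2 * w2 + q_3 * w3 + q_c) % P

# 'ADD' with no selector context: 2 + 3 - 5 = 0, constraint satisfied
print(eval_constraint(0x01, (0, 0, 0, 0, 0), (2, 3, 5, 0)))  # 0
```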
VM program columns
|Program counter|Instruction value|Selector context|Witness indices (4 columns)|Selector values|Witness values (4 columns)|Error flag|Gas counter|
Prover-computed lookup tables:
|Maps witness index to witness value (4 columns)|
Program-specific precomputed lookup tables:
|Maps program counter value to tuple of (instruction value, witness indices, selector context)|
The commitments form the program-specific components of the VM verification key.
VM-specific precomputed lookup tables
|Maps instruction value to selector values|
|Defines gas value per instruction|
VM Verifier Logic
New arithmetic checks are required in the verifier algorithm.
- The program counter must increase by 1 between consecutive rows (if we add JUMP/JUMPI instructions this logic will change)
Standard UltraPlonk arithmetic checks are applied to the selector values and witness values columns, with some changes to the standard arithmetic constraint to accommodate the error flag
e.g. for some assertion check expression f, we do not require f = 0 but instead:
- Check that the error flag e is boolean
- If e = 0, enforce f = 0; else f may be nonzero (and the error flag is exposed as a public input)
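One way to realise this, writing f for an assertion expression and e for the per-row error flag, is to blend the flag into the constraint. This encoding is illustrative, the text does not spell out the exact formulation:

```latex
e \cdot (e - 1) = 0 \qquad\text{($e$ is boolean)}
```
```latex
(1 - e) \cdot f = 0 \qquad\text{(if $e = 0$, the assertion $f = 0$ must hold)}
```
```latex
e \cdot (f \cdot f_{\mathrm{inv}} - 1) = 0 \qquad\text{(if $e = 1$, a Prover-supplied inverse $f_{\mathrm{inv}}$ proves $f \neq 0$)}
```

The third constraint matters for property 2 from the start of the document: without it, a dishonest Prover could set e = 1 on an assertion that actually passes and falsely claim the function failed.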
In the VM model, what used to be precomputed selectors are now part of the Prover's witness commitment.
The only selectors where this does not apply are:
- Existing plookup selectors
- Permutation selectors
The original Plonk permutation argument is no longer required as all witnesses are looked up via plookup tables.
Additional grand product arguments are required for the multiple new lookups
Set equivalence checks
Current Composers rely on set equivalence checks to perform range checks, RAM read/writes and ROM reads.
This isn't possible in the VM model as set equivalence checks require additional constraints that depend on the size of the set. i.e. if two programs require different numbers of range checks (or different range sizes) they will not produce two uniform circuits.
In the VM arch we need to find another way to evaluate range checks, ROM and RAM. Will leave this as a TODO for now; I doubt it's too tricky. Range checks also fall under the assertion category, i.e. failing a range check doesn't create an unsatisfiable circuit; the proof is valid but the output error flag is set to true.
Why this is a public-function-only abstraction
The following UltraPlonk/Honk selectors are now witness commitments! (The columns)
Additional witness commitments required that are not part of UltraPlonk/Honk:
In addition, selector context polynomials will likely represent at least 4 selectors in order to handle ACIR's linear combination instruction
That's 28 additional witness polynomial commitments on top of the usual Honk commitments.
The additional 4 plookup arguments will also likely require at least 2 additional grand product + sorted list commitments, pushing the number of extra commitments to 32.
To contrast, we are expecting Honk to have 6 Prover commitments: 4 wire commitments, plus Z and S (Z = combined grand product of the permutation argument and plookup argument; S = sorted list for the plookup argument)
Plus 2 opening proofs for KZG.
i.e. the overall number of Prover multi-exponentiations has grown from 8 (6 commitments + 2 opening proofs) to 40
TLDR: a VM simulator circuit of an ACIR program is approx. 6 times more expensive to prove than a proof of a natively-compiled ACIR circuit.
We can afford to pay this for public functions, but this is far too great an overhead for locally-generated private function proofs.
Building the VM simulator circuit
The plan is that the existing crypto backend can do the following by re-using the Plonk standard library with a custom composer:
- build the VM circuit lookup tables
- build VM verification keys for a given program and generate program-specific VM proofs
i.e. stdlib code is re-used and only composer code changes.
Computing the lookup tables using the stdlib
A program is written that executes every ACIR opcode in sequence.
The Composer will track an 'instruction counter'. If the opcode makes a standard library call, the stdlib function is executed as normal, but the Composer will do the following:
- Whenever a constraint is added, its selector values are written into the instruction lookup table at the current instruction counter
- A default value is written into the selector context table. We'll need to pass special context booleans into a stdlib function if it is performing a state read/write, in order to add special values into this table
Computing VM circuits + proofs using the stdlib
In this context, the Composer already possesses the precomputed lookup tables.
Consider the execution of an ACIR opcode stream. If the opcode makes a standard library call, the stdlib function is executed as normal, but the Composer will do the following when a constraint is added:
- The current program counter value is added into the program counter column
- The Composer validates that the constraint's selector values are present in the instruction lookup table, and looks up the associated instruction value
- The instruction value is added to the instruction value column
- If not already present, the program counter/instruction value mapping is written into the program-specific lookup table, along with the required witness indices
- Witness index values are written into the witness index columns
- Depending on context, some constraints may write values into the selector context column (e.g. basic arithmetic gates)
- Gas values are looked up from the gas table and the gas counter is updated
- The error flag is similarly updated if an assertion fails
We assume the circuit will evaluate an opcode stream of a fixed size. If the opcode stream is smaller than this limit, the remaining instructions are evaluated as NOP instructions
The Composer will be able to produce the program-specific components of the verification key as well as the Prover-specific commitments for a proof given the public inputs used.
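The two-pass Composer flow described above can be sketched roughly as follows: pass 1 executes every opcode once to populate the instruction lookup table; pass 2 replays a specific program, looking up (rather than defining) each instruction and padding the trace to a fixed size with NOPs. All names and the trace layout are illustrative.

```python
# Minimal two-pass sketch of the Composer flow (names are invented).
NOP = ("NOP", (0, 0, 0, 0, 0))

def build_instruction_table(all_opcodes):
    """Pass 1: record the selector values each opcode's constraints use."""
    return {op: selectors for op, selectors in all_opcodes}

def build_trace(program, table, trace_size):
    """Pass 2: validate each opcode against the table, pad with NOPs."""
    trace = []
    for pc, op in enumerate(program):
        assert op in table, f"unknown opcode {op}"
        trace.append((pc, op, table[op]))
    while len(trace) < trace_size:          # fixed-size opcode stream:
        trace.append((len(trace), *NOP))    # remaining rows are NOPs
    return trace

table = build_instruction_table([("ADD", (0, 1, 1, -1, 0)), NOP])
trace = build_trace(["ADD"], table, trace_size=4)
print(len(trace))  # 4
```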
Validating VM programs
When adding a contract to Aztec, we want to validate that the commitments are commitments to a published opcode stream.
We can do this efficiently in a custom circuit.
- Compute Fiat-Shamir challenges by hashing the commitments
- Evaluate the commitments at a challenge point via KZG, using a second challenge to create a linear combination of all commitments within the verification key
- Iterate over the opcode stream and manually compute the expected evaluation in linear time using field operations. Validate this matches the evaluation of the commitments
In addition, we can perform checks on each opcode to validate conditions on selector contexts within and ensure unsatisfiable constraints are not being generated.
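The linear-time step above amounts to evaluating the polynomial whose coefficients are the published instruction values and checking the result against the opened commitment. A sketch (the field modulus, challenge, and encoding are placeholders; the KZG opening itself is elided):

```python
# Recompute the instruction polynomial's evaluation at the Fiat-Shamir
# challenge z directly from the published opcode stream, in O(n) field
# operations via Horner's rule; a matching KZG opening of the committed
# polynomial then ties the commitment to the published stream.
P = 2**61 - 1  # stand-in prime field modulus

def eval_poly(coeffs, z):
    """Horner's rule: O(n) field ops for an n-coefficient polynomial."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % P
    return acc

opcode_stream = [0x01, 0x02, 0x01, 0x00]  # published instruction values
z = 123456789                             # Fiat-Shamir challenge (assumed)
expected = eval_poly(opcode_stream, z)    # value the KZG opening must match
```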
Open questions
No doubt there are many, but the first one on my mind is:
Q: Do we want to support JUMP/JUMPI instructions for public ACIR functions? Would be very nice to have but creates discontinuity between public/private functions.