Commitment Tree
The commitment tree is a data structure stored in and maintained by the Handler. It is a depth-16 quaternary merkle tree defined over the Poseidon hash function. Every Nocturne note is hashed and inserted into this merkle tree so JoinSplit proofs can use it to prove note authenticity.
We use a quaternary tree because it uses fewer constraints in our ZK circuit than the equivalent depth-32 binary tree does.
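For intuition on why a depth-16 quaternary tree is the "equivalent" of a depth-32 binary one: both hold the same number of leaves.

$$
4^{16} = (2^2)^{16} = 2^{32}
$$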
"ZK" Subtree Update Insertions
Naive insertions into the commitment tree are very expensive - in our initial prototypes, it cost us 1.3M gas to insert a single note into the tree.
We reduce this down to ~16K gas per insertion (~250K for 16 insertions) by batching insertions through a technique we call "subtree updates". Between this and batch proof verification, operations in Nocturne have a total overhead of just under 300K gas.
The main idea behind subtree updates is to update the tree using a ZKP. At a high level, it goes like this:
1. On-chain, each new insertion gets placed into a "queue".
2. Every time the queue has at least 16 elements, we remove the batch from the queue and hash it using sha256 (cheap on-chain). This hash is called the `accumulatorHash`.
3. An off-chain actor called the subtree updater generates and submits a ZKP of the following statement: given the current commitment tree root `oldRoot` and a claimed new root `newRoot`, there exists a batch of insertions such that:
   - `accumulatorHash` is the sha256 hash of the batch
   - `newRoot` is the root of the tree if the batch was inserted into the tree whose root is `oldRoot`
4. The contract verifies this proof, plugging in the current tree root and the `accumulatorHash` it computed in step 2. If the proof is valid, then the contract simply sets the commitment tree root to `newRoot`.
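As a rough sketch, the statement above relates three public values to a private batch of insertions. The names below are illustrative, not the circuit's actual signal names:

```typescript
// Illustrative shape of the subtree update proof's inputs; these are not
// the circuit's actual signal names.
interface SubtreeUpdatePublicInputs {
  oldRoot: bigint;         // current commitment tree root
  newRoot: bigint;         // claimed root after inserting the batch
  accumulatorHash: bigint; // sha256 hash of the batch, computed on-chain in step 2
}

// Private witness: the batch itself, plus whatever merkle path data the
// circuit needs to recompute the root (an assumption; not spelled out above).
interface SubtreeUpdateWitness {
  insertions: bigint[]; // the 16 insertions in the batch
}
```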
Next, we'll cover it again, but in greater detail.
Subtree Updates in Detail
Within the Nocturne protocol, there are three events that create new notes to be inserted:
1. A deposit, which creates a new note in the clear with asset, value, and owner specified by the depositor.
2. A refund, which creates a new note in the clear when there are assets remaining in the Handler at the end of an operation.
3. A JoinSplit, which creates new notes confidentially, revealing only their note commitments (recall, a note commitment is a Poseidon hash of the note, not a sha256 hash of the note).
Because we don't want to put the whole note into storage, the way an insertion gets hashed into `accumulatorHash` differs depending on which of these cases it is. To handle this, for each insertion we define the insertion digest as follows:
- If we're inserting a note (cases 1 and 2 above), the insertion digest is `sha256(note)`.
- If we're inserting a `noteCommitment` (case 3 above), the insertion digest is the `noteCommitment` itself.
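A minimal sketch of these two cases in TypeScript, assuming notes are already serialized to bytes and digests are represented as bigints (the exact note encoding is an assumption here):

```typescript
import { createHash } from "crypto";

// Sketch of the two insertion-digest cases. The byte encoding of a note is
// an assumption here; the real protocol's serialization is not shown.
function digestForNote(serializedNote: Uint8Array): bigint {
  // Cases 1 and 2: plaintext note, so the digest is sha256(note)
  const hash = createHash("sha256").update(serializedNote).digest("hex");
  return BigInt("0x" + hash);
}

function digestForNoteCommitment(noteCommitment: bigint): bigint {
  // Case 3: confidential JoinSplit output, so the digest is the note commitment itself
  return noteCommitment;
}
```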
Now we can lay out the mechanism in full. Let's start by defining the following state variables:
- `root`: the current root of the commitment tree
- `count`: the number of leaves in the tree. This does not include leaves that are queued but have yet to be inserted.
- `bitmap`: a bitmap indicating which insertion is a note and which is a note commitment
- `batch`: a `uint256` array of size `B`, where `B` is the batch size, chosen upon instantiation. We use a batch size of 16. This array is used to track a batch of insertion digests, as defined above.
- `batchLen`: a length counter used to indicate how many new insertion digests are queued up in `batch`
- `accumulatorQueue`: a queue of accumulator hashes, which we will define below
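For illustration, here is the same state written as a TypeScript interface; the real state lives in the Handler contract, so the types are approximate:

```typescript
// The tree's on-chain state, written as a TypeScript interface purely for
// illustration; the real state lives in the Handler contract.
interface CommitmentTreeState {
  root: bigint;               // current root of the commitment tree
  count: number;              // number of leaves in the tree (excludes queued-but-uninserted leaves)
  bitmap: bigint;             // which insertions in the current batch are notes vs. note commitments
  batch: bigint[];            // B insertion digests (B = 16)
  batchLen: number;           // how many insertion digests are queued up in `batch`
  accumulatorQueue: bigint[]; // queue of accumulator hashes awaiting a subtree update
}
```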
For each new note or note commitment to be inserted into the tree, we will compute the insertion digest, "push" it into `batch`, and increment `batchLen`. We will also set the corresponding bit in `bitmap` to 1 if we just inserted a note, 0 otherwise.
When `batch` is full (i.e. `batchLen = B`), we compute the batch's `accumulatorHash`, which is defined as `sha256(batch || bitmap)`, and enqueue it into `accumulatorQueue`.
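A sketch of this queueing logic, building on the state interface above; the byte layout fed into sha256 and the bit ordering of `bitmap` are assumptions, and the real contract's layout may differ:

```typescript
import { createHash } from "crypto";

const B = 16; // batch size

// Queueing logic sketch, using the CommitmentTreeState shape above. The
// big-endian uint256 encoding and the bit ordering of `bitmap` are
// assumptions; the contract's exact layout may differ.
function queueInsertion(state: CommitmentTreeState, digest: bigint, isNote: boolean): void {
  if (isNote) {
    state.bitmap |= 1n << BigInt(state.batchLen); // mark this slot as a note
  }
  state.batch[state.batchLen] = digest;
  state.batchLen += 1;

  if (state.batchLen === B) {
    // accumulatorHash = sha256(batch || bitmap)
    const hasher = createHash("sha256");
    for (const d of state.batch) hasher.update(toBytes32(d));
    hasher.update(toBytes32(state.bitmap));
    state.accumulatorQueue.push(BigInt("0x" + hasher.digest("hex")));

    // start a fresh batch
    state.batchLen = 0;
    state.bitmap = 0n;
  }
}

// big-endian 32-byte encoding of a uint256
function toBytes32(x: bigint): Uint8Array {
  const out = new Uint8Array(32);
  for (let i = 31; i >= 0; i--) {
    out[i] = Number(x & 0xffn);
    x >>= 8n;
  }
  return out;
}
```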
If we set `B` to a power of the tree arity (in our case four), then every batch will fall on a subtree of the commitment tree, making the batch insertion logic simple. This is why we call it a "subtree update". Nocturne currently uses `B = 16`, which is a power of four.
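Concretely, a depth-2 subtree of the quaternary tree holds exactly

$$
4^2 = 16 = B
$$

leaves, so every full batch fills one such subtree, and the depth-16 tree has room for 4^14 of them.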
We hash `batch || bitmap`, not just `batch`, to prevent a malicious subtree updater from lying about which case (note or note commitment) an insertion was, which would allow them to destroy deposits.
This is all more or less a fancy way to "queue" up batches of insertions so that we avoid having to store all of the individual insertions on-chain.
Then, a permissionless off-chain actor called the subtree updater will do the following to update the tree:
1. Constantly observe new insertions as they are enqueued on-chain.
2. Whenever a batch is filled, pull that batch's `accumulatorHash` from the chain and apply the batch to a local copy of the tree.
3. Generate a ZKP that proves the following statement: if one were to apply the batch whose hash is `accumulatorHash` to the tree with root `oldRoot`, the new root would be `newRoot`. The circuit for this is described in greater detail in the subtree update circuit section.
4. Submit it to the chain by calling a method `subtreeUpdate(proof, newRoot)`.

`subtreeUpdate` will verify the proof against the current `root` and the head of the `accumulatorQueue`. If the proof is valid, `subtreeUpdate` can conclude that `newRoot` is correct, so it will:

- dequeue from `accumulatorQueue`
- set `root` to `newRoot`

If the proof is invalid, `subtreeUpdate` will revert.
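Expressed as a TypeScript sketch of the contract-side logic (the real method lives in the Handler contract, and `verifyProof` stands in for the on-chain proof verifier, which is assumed here), using the `CommitmentTreeState` shape and batch size `B` from the earlier sketches:

```typescript
// `verifyProof` stands in for the on-chain ZKP verifier; its signature here
// is an assumption made for this sketch.
declare function verifyProof(proof: Uint8Array, publicInputs: bigint[]): boolean;

// Contract-side logic of `subtreeUpdate`, sketched in TypeScript using the
// CommitmentTreeState shape and batch size B from the earlier sketches.
function subtreeUpdate(state: CommitmentTreeState, proof: Uint8Array, newRoot: bigint): void {
  // verify against the current root and the head of accumulatorQueue
  const accumulatorHash = state.accumulatorQueue[0];

  if (!verifyProof(proof, [state.root, newRoot, accumulatorHash])) {
    throw new Error("invalid subtree update proof"); // the contract reverts here
  }

  // the proof is valid, so newRoot is correct
  state.accumulatorQueue.shift(); // dequeue from accumulatorQueue
  state.root = newRoot;           // set root to newRoot
  state.count += B;               // (presumably) the B new leaves are now counted as inserted
}
```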
In this scheme, we never have to do any Poseidon hashing on-chain. While sha256 in-circuit is too expensive for a client, the subtree updater role can be carried out by a powerful-ish server - as long as someone is updating the tree, the system will continue processing transactions.
In practice, the subtree update prover isn't that expensive. It only takes ~6s with rapidsnark on an Ubuntu desktop with a 16-core AMD Ryzen 9 5950X and 64GB of RAM.
The gas cost per insertion is now the cost to verify the subtree update proof split `B` ways, plus the gas cost of a few more (warm) storage reads/writes for the `batch` array and `accumulatorQueue`.
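Plugging in the figures from earlier on this page, that works out to roughly

$$
\frac{\sim 250{,}000 \text{ gas per batch}}{16 \text{ insertions per batch}} \approx 16{,}000 \text{ gas per insertion.}
$$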