The commitment tree is a data structure stored in and maintained by the Handler. It is a depth-16 quaternary merkle tree defined over the Poseidon hash function. Every Nocturne note is hashed and inserted into this merkle tree so JoinSplit proofs can use it to prove note authenticity.
We use a quaternary tree because it uses fewer constraints in our ZK circuit than the equivalent depth-32 binary tree does.
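To make the tradeoff concrete, here is a quick check (in Python, purely for illustration) that a depth-16 quaternary tree holds exactly as many leaves as a depth-32 binary tree, while a Merkle path crosses half as many levels:

```python
# Capacity of a Merkle tree = arity ** depth.
quaternary_capacity = 4 ** 16  # depth-16, arity-4
binary_capacity = 2 ** 32      # depth-32, arity-2

# Same number of leaves in both trees...
assert quaternary_capacity == binary_capacity

# ...but a Merkle path in the quaternary tree crosses 16 levels instead
# of 32, so the circuit performs half as many hash invocations (each
# over 4 children instead of 2).
quaternary_path_hashes = 16
binary_path_hashes = 32
```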
Naive insertions into the commitment tree are very expensive - in our initial prototypes, it cost 1.3M gas to insert a single note into the tree.
We reduce this down to ~16K gas per insertion (~250K for 16 insertions) by batching insertions and through a technique we call "subtree updates". Between this and batch proof verification, operations in Nocturne have a total overhead of just under 300K gas.
The main idea behind subtree updates is to update the tree using a ZKP. At a high level, it goes like this:
1. On-chain, each new insertion gets placed into a "queue".
2. Every time the queue has at least 16 elements, we remove the batch from the queue and hash it using sha256 (cheap on-chain). This hash is called the `accumulatorHash`.
3. An off-chain actor called the subtree updater generates and submits a ZKP of the following statement: given the current commitment tree root `oldRoot` and a claimed new root `newRoot`, there exists a batch of insertions such that:
   - `accumulatorHash` is the sha256 hash of the batch
   - `newRoot` is the root of the tree if the batch were inserted into the tree whose root is `oldRoot`
4. The contract verifies this proof, plugging in the current tree root and the `accumulatorHash` it computed in step 2. If the proof is valid, then the contract simply sets the commitment tree root to `newRoot`.
Next, we'll cover it again, but in greater detail.
Within the Nocturne protocol, there are three events that create new notes to be inserted:

1. A deposit, which creates a new note in the clear with asset, value, and owner specified by the depositor.
2. A refund, which creates a new note in the clear when there are assets remaining in the handler at the end of an operation.
3. A JoinSplit, which creates new notes confidentially, only revealing their note commitments (recall, a note commitment is a Poseidon hash of the note, not a sha256 hash of the note).
Whenever an insertion needs to take place, the way it gets hashed into `accumulatorHash` needs to account for which kind of insertion it is, because we don't want to put the whole note into storage. To deal with this, for each insertion we define the insertion digest as follows:

1. If we're inserting a `note` (cases 1 and 2 above), the insertion digest is `sha256(note)`.
2. If we're inserting a `noteCommitment` (case 3 above), the insertion digest is the `noteCommitment` itself.
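The two cases can be sketched as follows. The note serialization here (packing field elements as 32-byte big-endian words) is illustrative only - Nocturne's actual encoding may differ:

```python
import hashlib

def insertion_digest(insertion, is_note: bool) -> int:
    """Compute the digest that gets pushed into the on-chain batch."""
    if is_note:
        # Cases 1 and 2: the note is public, so we hash its fields with
        # sha256 (cheap on-chain). Hypothetical encoding: each field
        # element as a 32-byte big-endian word.
        data = b"".join(x.to_bytes(32, "big") for x in insertion)
        return int.from_bytes(hashlib.sha256(data).digest(), "big")
    # Case 3: the insertion is already a Poseidon note commitment, so we
    # use it directly without ever seeing the underlying note.
    return insertion
```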
Now we can lay out the mechanism in full. Let's start by defining the following state variables:
- `root`: the current root of the commitment tree.
- `count`: the number of leaves in the tree. This does not include leaves that are queued but have yet to be inserted.
- `bitmap`: a bitmap indicating which insertions are notes and which are note commitments.
- `batch`: a `uint256` array of size `B`, where `B` is the batch size, chosen upon instantiation. We use a batch size of 16. This array is used to track a batch of insertion digests, as defined above.
- `batchLen`: a length counter used to indicate how many new insertion digests are queued up in `batch`.
- `accumulatorQueue`: a queue of accumulator hashes, which we will define below.
For each new note or note commitment to be inserted into the tree, we compute the insertion digest, "push" it into `batch`, and increment `batchLen`. We also set the corresponding bit in `bitmap` to `1` if we just inserted a note, and `0` if we inserted a note commitment. Whenever `batch` is full (i.e. `batchLen = B`), we compute the batch's `accumulatorHash`, which is defined as `sha256(batch || bitmap)`, and enqueue it into `accumulatorQueue`.
If we set `B` to a power of the tree arity (in our case, four), then every batch will fall on a subtree of the commitment tree, making the batch insertion logic simple. This is why we call it a "subtree update". Nocturne currently uses `B = 16`, which is a power of four.
We hash `batch || bitmap`, not just `batch`, to prevent a malicious subtree updater from lying about which case (note or note commitment) an insertion was, which would allow them to destroy deposits.
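A minimal sketch of this batching logic, assuming insertion digests are 32-byte words and the bitmap is packed into a single word appended after the batch (the contract's actual byte layout may differ):

```python
import hashlib
from collections import deque

B = 16  # batch size; a power of the tree arity (4)

class InsertionBatcher:
    def __init__(self):
        self.batch = [0] * B
        self.bitmap = 0
        self.batch_len = 0
        self.accumulator_queue = deque()

    def push(self, digest: int, is_note: bool):
        # Record which case this insertion was in the bitmap.
        if is_note:
            self.bitmap |= 1 << self.batch_len
        self.batch[self.batch_len] = digest
        self.batch_len += 1
        if self.batch_len == B:
            # accumulatorHash = sha256(batch || bitmap)
            data = b"".join(d.to_bytes(32, "big") for d in self.batch)
            data += self.bitmap.to_bytes(32, "big")
            self.accumulator_queue.append(hashlib.sha256(data).digest())
            # Reset for the next batch.
            self.batch_len = 0
            self.bitmap = 0
```

Note that two batches with identical digests but different bitmaps produce different accumulator hashes, which is exactly the property that stops an updater from misrepresenting an insertion's case.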
This is all more or less a fancy way to "queue" up batches of insertions in a manner such that we can avoid needing to store all of the individual insertions on-chain.
The subtree updater operates as follows:

1. Constantly observe new insertions as they are enqueued on-chain.
2. Whenever a batch is filled, pull that batch's `accumulatorHash` from the chain and apply the batch to a local copy of the tree.
3. Generate a ZKP that proves the following statement: if one were to apply the batch whose hash is `accumulatorHash` to a local copy of the tree with root `oldRoot`, the new root would be `newRoot`. The circuit for this is described in greater detail in subtree update circuit.
4. Submit that proof to the chain by calling a method `subtreeUpdate`.
5. `subtreeUpdate` will verify the proof against the current `root` and the head of the `accumulatorQueue`.
6. If the proof is valid, `subtreeUpdate` can conclude that `newRoot` is correct, so it will:
   1. dequeue from `accumulatorQueue`
   2. set `root` to `newRoot`
   3. increase `count` by `B`
7. If the proof is invalid, the call reverts and the tree is left unchanged.
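Steps 5-7 can be sketched as follows. Here `verifier` stands in for the SNARK verifier, and all names are illustrative rather than Nocturne's actual contract interface:

```python
from collections import deque

class CommitmentTreeManager:
    B = 16  # batch size

    def __init__(self, empty_root: int, verifier):
        self.root = empty_root
        self.count = 0
        self.accumulator_queue = deque()
        # verifier(proof, old_root, new_root, accumulator_hash) -> bool
        self.verifier = verifier

    def subtree_update(self, new_root: int, proof) -> None:
        acc_hash = self.accumulator_queue[0]  # head of the queue
        # Step 5: verify against the current root and the queue head.
        if not self.verifier(proof, self.root, new_root, acc_hash):
            # Step 7: invalid proof -> revert; tree state is untouched.
            raise ValueError("invalid subtree update proof")
        # Step 6: accept the batch.
        self.accumulator_queue.popleft()
        self.root = new_root
        self.count += self.B
```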
In this scheme, we never have to do any Poseidon hashing on-chain. While sha256 in-circuit is too expensive for a client, the Subtree Updater role can be carried out by a powerful-ish server - as long as someone is updating the tree, then the system will continue processing transactions.
In practice, the subtree update prover isn't that expensive. It only takes ~6s with rapidsnark on an Ubuntu desktop with a 16-core AMD Ryzen 9 5950X and 64GB of RAM.
The gas cost per insertion is now the cost to verify the subtree update proof split `B` ways, plus the gas cost of a few more (warm) storage reads/writes for the queueing logic.
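Plugging in the figures quoted earlier, the amortization works out roughly as follows (illustrative arithmetic only; real gas costs vary with calldata and storage access patterns):

```python
B = 16
batch_cost = 250_000           # approximate gas to process one full batch
per_insertion = batch_cost // B  # ~15.6K gas, matching the ~16K figure

# Compare against the naive approach of Poseidon-hashing on-chain.
naive_per_insertion = 1_300_000
savings_factor = naive_per_insertion / per_insertion  # roughly 80x cheaper
```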