The router assembles query execution receipts into a Merkle tree: client addresses are sorted lexicographically, and within each address the receipts are sorted by query nonce.
The router then submits the root of the resulting Merkle tree together with the withdrawal data for each client. The withdrawal amount is calculated from the queries processed and the data retrieved.
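A minimal sketch of the leaf ordering and root computation described above. The receipt fields, JSON serialization, sha256 hashing, and the convention of pairing an odd node with itself are all assumptions for illustration; they are not specified in this issue.

```ts
import { createHash } from "crypto";

// Hypothetical receipt shape; field names are assumptions for illustration.
interface Receipt {
  clientAddress: string; // hex address of the paying client
  queryNonce: number;    // per-client query counter
  scannedBytes: number;  // query response metrics
  resultBytes: number;
  signature: string;     // signature over the receipt
}

const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

// Leaf = hash of the serialized receipt (serialization format is an assumption).
const leafHash = (r: Receipt): Buffer =>
  sha256(Buffer.from(JSON.stringify(r)));

// Order leaves as described: clients lexicographically, then receipts by nonce.
function orderedLeaves(receipts: Receipt[]): Buffer[] {
  const sorted = [...receipts].sort((a, b) =>
    a.clientAddress === b.clientAddress
      ? a.queryNonce - b.queryNonce
      : a.clientAddress < b.clientAddress ? -1 : 1
  );
  return sorted.map(leafHash);
}

// Plain binary Merkle tree; an odd node is paired with itself.
function merkleRoot(leaves: Buffer[]): Buffer {
  if (leaves.length === 0) return sha256(Buffer.alloc(0));
  let level = leaves;
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const left = level[i];
      const right = level[i + 1] ?? left;
      next.push(sha256(Buffer.concat([left, right])));
    }
    level = next;
  }
  return level[0];
}
```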
The remaining task is to prove that the router actually knows the signed receipt data behind the commitment, i.e. that the commitment is sound. Two options are considered:
- ZKP -- potentially the best option, but requires more R&D.
- Commitment reveal -- after the commitment, the processor has to reveal the path to a pseudo-randomly selected leaf of the Merkle tree. The contract can then verify the signatures and check that the Merkle path matches the committed root (see the sketch after this list).
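A minimal sketch of the path check the contract would perform in the commitment-reveal option, written off-chain here for clarity. It assumes the same sha256 pairing convention as the tree sketch above; how the leaf index is derived pseudo-randomly (e.g. from a later block hash) is out of scope and left as an assumption.

```ts
import { createHash } from "crypto";

const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

// `proof` lists sibling hashes from the leaf up to the root; `index` is the
// leaf position (assumed to be selected pseudo-randomly, e.g. from a block
// hash modulo the number of leaves) and decides left/right at each level.
function verifyMerklePath(
  leaf: Buffer,
  index: number,
  proof: Buffer[],
  root: Buffer
): boolean {
  let node = leaf;
  let i = index;
  for (const sibling of proof) {
    node = i % 2 === 0
      ? sha256(Buffer.concat([node, sibling]))
      : sha256(Buffer.concat([sibling, node]));
    i = Math.floor(i / 2);
  }
  return node.equals(root);
}
```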
The router submits batched query execution logs on-chain, including the query signatures and query response metrics (data scanned, result set size).
The contract verifies the signatures, calculates the fees to be extracted and deducts them from the client deposits (a toy fee-settlement sketch follows below).
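A toy sketch of the fee calculation and deposit deduction, again off-chain for readability. The pricing formula and the per-unit rates are placeholders, not something defined in this issue; signature verification is omitted.

```ts
// Hypothetical fee model: per-unit rates are placeholders (assumed).
interface QueryMetrics {
  scannedBytes: number;
  resultBytes: number;
}

const PRICE_PER_SCANNED_BYTE = 1n;  // token wei per byte scanned (assumed)
const PRICE_PER_RESULT_BYTE = 10n;  // token wei per byte returned (assumed)

function queryFee(m: QueryMetrics): bigint {
  return (
    BigInt(m.scannedBytes) * PRICE_PER_SCANNED_BYTE +
    BigInt(m.resultBytes) * PRICE_PER_RESULT_BYTE
  );
}

// Deduct batch fees from each client's deposit, as the contract would.
// `deposits` maps client address -> remaining deposit; throws if underfunded.
function settleBatch(
  deposits: Map<string, bigint>,
  batch: { clientAddress: string; metrics: QueryMetrics }[]
): void {
  for (const { clientAddress, metrics } of batch) {
    const fee = queryFee(metrics);
    const balance = deposits.get(clientAddress) ?? 0n;
    if (balance < fee) {
      throw new Error(`insufficient deposit for ${clientAddress}`);
    }
    deposits.set(clientAddress, balance - fee);
  }
}
```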
What are the limitations on the size of the processed query batches?
Should we consider moving to a dedicated rollup?