Worker validation

The network has to check that each epoch:
The worker stays online and accepts client queries
The worker returns valid responses
The worker reports valid metrics to the coordinator
All three checks are performed by a centralized service (the coordinator). The system should provide the following guarantees:
The worker must store all the chunks assigned to it (Non-Byzantineness)
The worker should be incentivized to accept user queries (Liveness)
The worker should be strongly disincentivized from reporting a wrong query result (Validity)
The proposed solution is to let the coordinator collect the signed query logs from each worker (see #39). The coordinator then randomly selects queries from the log, submits them to the other workers that hold the required chunk, and compares the results in a consensus fashion. The random selection keeps the overhead of re-running queries under control: for example, if each query is picked with probability 0.1, the average overhead is 20%, since each picked query is sent to two more workers to form a non-trivial consensus. In the future, the verification mechanism can be replaced by an SGX runner, further reducing the cost of verification.
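The sampling scheme above can be sketched as follows. The helper name, the sampling interface, and the 10,000-query log are illustrative assumptions, not part of any actual coordinator API:

```python
import random

def select_for_verification(query_log, p=0.1, seed=None):
    """Pick a random subset of a worker's signed query log for re-execution.

    Each selected query is re-run on two other workers holding the same
    chunk, so a 3-way consensus can be formed with the original result.
    Hypothetical helper, not an actual coordinator API.
    """
    rng = random.Random(seed)
    return [q for q in query_log if rng.random() < p]

log = [f"query-{i}" for i in range(10_000)]
picked = select_for_verification(log, p=0.1, seed=42)

# Two extra executions per picked query => expected overhead p * 2 = 20%.
overhead = 2 * len(picked) / len(log)
```

Because selection is independent per query, the realized overhead concentrates tightly around 20% for logs of realistic size.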
Assumptions
The replication factor is at least 3
Each worker is reachable from the coordinator. If the coordinator does not receive a response from a worker, that worker is considered offline.
Non-Byzantineness
Even though the workers can identify that the test queries originate from the coordinator, the randomness guarantees that they cannot predict in advance which chunk is going to be tested. This means that a worker has to keep all the chunks assigned to it by the coordinator. If a worker simply drops the coordinator's request, it incurs a liveness penalty. Thus, to be able to respond to the coordinator's requests, the worker must keep all of its assigned chunks.
Liveness
Let's assume that a worker deliberately drops a client request. Then either it cannot serve the request because it lacks the required chunk (a forced drop), or it could in fact process it. In the former case, the client will pick another worker (we assume there is at least one active worker holding the chunk needed to resolve the request). The query will then be processed, and with significant probability the coordinator will later send a verification query to the original worker, catching the fact that it is missing the required chunk.
If the worker does have the capacity to handle the request, it is incentivized to do so, since the worker's payout depends on the total fees it processes (together with the delegated stake). Therefore processing a client request is always economically preferable.
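The incentive argument can be made concrete with a toy payout model. The formula below is an illustrative assumption (the issue does not specify the exact reward function), capturing only that the reward is increasing in fees processed:

```python
def epoch_payout(fees_processed, delegated_stake, stake_rate=0.05):
    """Toy epoch reward: grows with the total query fees the worker
    processed, plus a share tied to the delegated stake. Illustrative
    model only; the actual formula is not specified in this issue.
    """
    return fees_processed + stake_rate * delegated_stake

# Serving one more request (and earning its fee) always beats dropping it.
served = epoch_payout(fees_processed=101.0, delegated_stake=1_000.0)
dropped = epoch_payout(fees_processed=100.0, delegated_stake=1_000.0)
```

Any payout function that is monotone in processed fees yields the same conclusion.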
Validity
Query validity is checked optimistically: the worker commits to the query result when submitting it to the coordinator. Committing to an invalid result carries a disproportionate risk of failing the probabilistic verification that follows.
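The expected value of cheating can be sketched under simple assumptions: an invalid result is caught exactly when the query is sampled for verification and honest replicas outvote it, and being caught slashes the stake. The parameter values and the slashing rule here are illustrative, not taken from the issue:

```python
def cheating_expected_value(fee_gain, stake, p_verify):
    """Expected value of committing to one invalid query result.
    Illustrative model: the result is caught iff the query is sampled
    for verification (probability p_verify) and the honest replicas
    disagree, in which case the worker's stake is slashed.
    """
    return (1 - p_verify) * fee_gain - p_verify * stake

# With a 10% sampling rate and a stake that dwarfs the per-query fee,
# cheating has a large negative expected value.
ev = cheating_expected_value(fee_gain=0.01, stake=1_000.0, p_verify=0.1)
```

As long as the stake is large relative to the per-query fee, even a modest sampling probability makes dishonest commitments a losing bet.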