Can we make consensus on a batch of messages in microbenchmarks? #119
Comments
What is the size of the requests that the clients are sending? It should be less than half the value set in

How many clients are you running? It might not be enough if you try with 2 or 3. Can you try with more clients?
I use one client by

Should I use
You have to increase the number of clients from 1 to, for example, 20: This way, you will have 20 clients sending requests. As you noticed, each client generated by ThroughputLatencyClient will send a request and wait for the response before sending another request. That is why, when executing ThroughputLatencyClient with a single client, you saw batches of size 1. AsyncLatencyClient allows a single client to send multiple requests without waiting for a response. However, I recommend first trying ThroughputLatencyClient with more than one client, e.g., try with 20.
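The closed-loop behavior described above (each ThroughputLatencyClient worker waits for a reply before sending its next request) can be illustrated with a small standalone sketch. This is not BFT-SMaRt code; the `BatchSketch` class and its methods are hypothetical, written only to show why the batch a leader cuts can never exceed the number of concurrent closed-loop clients, regardless of `maxbatchsize`:

```java
import java.util.ArrayList;
import java.util.List;

// Standalone illustration: with closed-loop clients, at most one request
// per client is pending at any moment, so a cut batch can never be larger
// than the number of concurrent clients, whatever maxbatchsize allows.
public class BatchSketch {

    // Drain up to maxBatchSize pending requests into a single batch.
    static List<Integer> cutBatch(List<Integer> pending, int maxBatchSize) {
        int n = Math.min(pending.size(), maxBatchSize);
        List<Integer> batch = new ArrayList<>(pending.subList(0, n));
        pending.subList(0, n).clear(); // remove the batched requests
        return batch;
    }

    // Size of the batch when numClients closed-loop clients each have
    // exactly one outstanding request at batch-cut time.
    static int batchSizeFor(int numClients, int maxBatchSize) {
        List<Integer> pending = new ArrayList<>();
        for (int c = 0; c < numClients; c++) {
            pending.add(c); // one pending request per client
        }
        return cutBatch(pending, maxBatchSize).size();
    }

    public static void main(String[] args) {
        // One closed-loop client: batches of size 1, as observed in the issue.
        System.out.println(batchSizeFor(1, 1024));
        // Twenty clients: batches of up to 20, still far below maxbatchsize.
        System.out.println(batchSizeFor(20, 1024));
    }
}
```

With `maxbatchsize = 1024`, the limiting factor here is the client count, which is why the suggestion above is to raise the number of clients rather than any server-side parameter.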
So, if I want the server to act like a blockchain node that packages transactions into a block with at most 1024 transactions, can I do that by setting
If you want to "create blocks" every 1 second, you can set batchtimeout=1000. However, there is no guarantee that with only 1024 clients you will have a whole batch because not all clients may send requests within 1 second. Moreover, BFT-SMaRt, as it is, is not adequate for use as a blockchain. I recommend reading the following paper: |
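For reference, the batching parameters discussed above would sit in `system.config` roughly as in the fragment below. The key names follow BFT-SMaRt's configuration file conventions, but treat the exact names and the 1000 ms value as assumptions to verify against the `system.config` shipped with your version:

```properties
# Cut a batch after at most 1024 pending requests...
system.totalordermulticast.maxbatchsize = 1024

# ...or after 1000 ms, whichever comes first
# (value taken from the batchtimeout=1000 suggestion above)
system.totalordermulticast.batchtimeout = 1000
```

Note that, as explained above, the timeout only bounds how long the leader waits; it does not guarantee full batches if too few clients send requests within that window.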
When I run the microbenchmarks demo, I have found that consensus is always reached on only one client request instead of a batch of such requests, even though the `maxbatchsize` and `samebatchsize` parameters are set to `1024` and `false` respectively in `system.config`. To confirm this, I print `requests.length` in `public void receiveMessages(int consId[], int regencies[], int leaders[], CertifiedDecision[] cDecs, TOMMessage[][] requests)` of `ServiceReplica`, and the console result is always `1`. Similarly, `commands.length` always equals `1` in `appExecuteBatch` of `ThroughputLatencyServer`.

So I wonder how to reach consensus on a batch of messages instead of a single one in the microbenchmarks.