Can we make consensus on a batch of messages in microbenchmarks? #119

Open · chenweiyj opened this issue May 14, 2025 · 5 comments
@chenweiyj commented May 14, 2025

When I run the microbenchmarks demo, I have found that consensus is always reached on a single client request instead of a batch of such requests, even though the `maxbatchsize` and `samebatchsize` parameters are set to 1024 and false respectively in system.config. To confirm this, I printed `requests.length` in `public void receiveMessages(int consId[], int regencies[], int leaders[], CertifiedDecision[] cDecs, TOMMessage[][] requests)` of ServiceReplica, and the console output is always 1. Similarly, `commands.length` always equals 1 in `appExecuteBatch` of ThroughputLatencyServer.

So I wonder how to reach consensus on a batch of messages, instead of one at a time, in the microbenchmarks.
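
For reference, this is roughly the instrumentation I used on the server side; a minimal sketch assuming the `DefaultRecoverable`-style callbacks that ThroughputLatencyServer overrides (the exact `appExecuteBatch` signature and replica bootstrap may vary between BFT-SMaRt versions):

```java
import bftsmart.tom.MessageContext;
import bftsmart.tom.ServiceReplica;
import bftsmart.tom.server.defaultservices.DefaultRecoverable;

public class BatchSizeLoggingServer extends DefaultRecoverable {

    @Override
    public byte[][] appExecuteBatch(byte[][] commands, MessageContext[] msgCtxs, boolean fromConsensus) {
        // With a single closed-loop client, this always prints 1.
        System.out.println("Delivered batch size: " + commands.length);
        byte[][] replies = new byte[commands.length][];
        for (int i = 0; i < commands.length; i++) {
            replies[i] = new byte[0]; // empty reply, enough for measuring batch sizes
        }
        return replies;
    }

    @Override
    public byte[] appExecuteUnordered(byte[] command, MessageContext msgCtx) {
        return new byte[0];
    }

    @Override
    public byte[] getSnapshot() {
        return new byte[0]; // no state to snapshot in this sketch
    }

    @Override
    public void installSnapshot(byte[] state) {
        // no state to restore in this sketch
    }

    public static void main(String[] args) {
        BatchSizeLoggingServer server = new BatchSizeLoggingServer();
        // ServiceReplica wires the server into the replica-side protocol stack
        new ServiceReplica(Integer.parseInt(args[0]), server, server);
    }
}
```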

@rvassantlal (Contributor) commented

`samebatchsize` is a boolean parameter. Even if it is false, replicas should deliver a batch of requests to the application as long as the ordering was done on a batch of requests.

What is the size of the requests that your clients are sending? It should be less than half of the value set in `system.totalordermulticast.maxBatchSizeInBytes`.

How many clients are you running? Two or three might not be enough. Can you try with more clients?
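
For reference, these are the batching-related entries I mean in config/system.config. The key names below follow the ones mentioned in this thread; exact names and defaults may differ between BFT-SMaRt versions, so check the sample config shipped with your version:

```
# Maximum number of requests that can be ordered in one consensus instance
system.totalordermulticast.maxbatchsize = 1024

# Maximum size of a batch in bytes; a request is only batched if it is
# smaller than half of this value
system.totalordermulticast.maxBatchSizeInBytes = 1000000
```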

@chenweiyj (Author) commented

I use one client via `./smartrun.sh bftsmart.demo.microbenchmarks.ThroughputLatencyClient 1001 1 2000 1024 0 false true nosig`, and I have found that this client waits for the response each time it sends a request. `system.totalordermulticast.maxBatchSizeInBytes` is left at its default value of 1000000 in system.config.

Should I use AsyncLatencyClient instead?

@rvassantlal (Contributor) commented

You have to increase the number of clients from 1 to, for example, 20:
`./smartrun.sh bftsmart.demo.microbenchmarks.ThroughputLatencyClient 1001 20 2000 1024 0 false true nosig`

This way, you will have 20 clients sending requests. As you noticed, each client generated by ThroughputLatencyClient sends a request and waits for the response before sending the next one. That is why, when executing ThroughputLatencyClient with a single client, you saw batches of size 1.

AsyncLatencyClient allows a single client to send multiple requests without waiting for a response. However, I recommend first trying ThroughputLatencyClient with more than one client, e.g., try with 20.
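
To illustrate, here is a minimal sketch of what running ThroughputLatencyClient with 20 clients effectively does: each thread is a closed-loop client with its own `ServiceProxy`. The class below is my simplification, not the actual demo code (argument parsing, statistics, and reply checks are omitted):

```java
// Sketch of 20 closed-loop clients, in the spirit of ThroughputLatencyClient.
// Each thread has its own ServiceProxy and only sends the next request after
// receiving the previous reply; with several threads in flight, the leader
// can order requests from different clients in a single batch.
import bftsmart.tom.ServiceProxy;

public class MultiClientSketch {

    public static void main(String[] args) {
        int initialClientId = 1001;           // first client id, as in the command above
        int numClients = 20;
        int numOperations = 2000;
        byte[] request = new byte[1024];      // 1024-byte payload, as in the command above

        for (int i = 0; i < numClients; i++) {
            final int clientId = initialClientId + i;
            new Thread(() -> {
                ServiceProxy proxy = new ServiceProxy(clientId);
                for (int op = 0; op < numOperations; op++) {
                    proxy.invokeOrdered(request); // blocks until the reply arrives
                }
                proxy.close();
            }).start();
        }
    }
}
```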

@chenweiyj (Author) commented

So, if I want the server to act like a blockchain node that packages transactions into a block with a maximum block size of 1024 transactions, by setting maxbatchsize=1024 and batchtimeout=1000 so that blocks (or batches, in BFT-SMaRt terms) are created every second, then I should try ThroughputLatencyClient with 1024 clients. Right?

@rvassantlal (Contributor) commented

If you want to "create blocks" every 1 second, you can set batchtimeout=1000. However, there is no guarantee that with only 1024 clients you will fill a whole batch, because not all clients may send requests within 1 second.
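
Concretely, the scenario you describe would look like this in config/system.config. I am assuming the usual `system.totalordermulticast` prefix for the batch timeout key; double-check the sample config of your BFT-SMaRt version:

```
# Up to 1024 requests ordered in one consensus instance (one "block")
system.totalordermulticast.maxbatchsize = 1024

# Trigger ordering of a (possibly partial) batch after 1000 ms
system.totalordermulticast.batchtimeout = 1000
```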

Moreover, BFT-SMaRt, as it is, is not adequate for use as a blockchain. I recommend reading the following paper: *From Byzantine Replication to Blockchain: Consensus is only the Beginning*.
