
Micro-benchmark Results for version 2.0 #120


Open · Christoph-Denzel opened this issue May 17, 2025 · 2 comments


@Christoph-Denzel

Hello, just a note up front that this is a non-issue; I didn't know where else to post it, since this repository has no Discussions tab.

In the paper from 2014, the reported maximum throughput was roughly 80,000 requests per second for a 0/0 micro-benchmark.
I ran performance tests on more powerful machines with a fast network, but overall the performance was worse than reported: roughly 40-50k requests per second.
The difference, I think, is latency: even for a single client it takes 4 ms to receive a reply, even after some warm-up time.
That is much higher than the 1 ms reported back then.
The network round-trip time is only 0.2-0.3 ms, measured with ICMP pings.
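For context, this is roughly how I measure the client-side numbers: a generic warm-up-then-sample harness, not BFT-SMaRt code. The empty `Runnable` in `main` is a hypothetical stand-in for issuing one 0-byte request through the client proxy.

```java
import java.util.Arrays;

// Minimal latency harness (a sketch, not BFT-SMaRt code): times a
// request closure and reports mean and 99th-percentile latency in ms.
public class LatencyProbe {
    public static double[] measure(Runnable request, int warmup, int samples) {
        for (int i = 0; i < warmup; i++) request.run();   // discard warm-up iterations
        long[] ns = new long[samples];
        for (int i = 0; i < samples; i++) {
            long t0 = System.nanoTime();
            request.run();
            ns[i] = System.nanoTime() - t0;
        }
        Arrays.sort(ns);
        double meanMs = Arrays.stream(ns).average().orElse(0) / 1e6;
        double p99Ms = ns[(int) (samples * 0.99) - 1] / 1e6;
        return new double[] { meanMs, p99Ms };
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for one synchronous 0-byte request.
        double[] r = measure(() -> { }, 1_000, 10_000);
        System.out.printf("mean=%.3f ms  p99=%.3f ms%n", r[0], r[1]);
    }
}
```

The 4 ms figure above is the mean from a harness like this, after the warm-up phase.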

As far as I understand, the biggest difference is the use of TLS for replica communication and signatures for client communication.
Has something else changed since then that could have impacted performance?
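To get a feel for how much the client signatures alone might add per request, here is a standalone JDK timing sketch. The algorithm (`SHA256withECDSA`) and key size are my assumptions for illustration, not necessarily what version 2.0 actually uses:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Rough per-request cost of one client signature (assumed algorithm
// and key size -- a sketch, not BFT-SMaRt's actual configuration).
public class SigCost {
    public static double signMicros(String algo, KeyPair kp, int iters) throws Exception {
        Signature s = Signature.getInstance(algo);
        byte[] payload = new byte[0];  // 0-byte request, as in the 0/0 benchmark
        long t0 = System.nanoTime();
        for (int i = 0; i < iters; i++) {
            s.initSign(kp.getPrivate());
            s.update(payload);
            s.sign();
        }
        return (System.nanoTime() - t0) / 1e3 / iters;  // microseconds per signature
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator g = KeyPairGenerator.getInstance("EC");
        g.initialize(256);
        KeyPair kp = g.generateKeyPair();
        System.out.printf("ECDSA P-256 sign: %.1f us/request%n",
                signMicros("SHA256withECDSA", kp, 500));
    }
}
```

If signing plus TLS handshakes/records add even a few hundred microseconds per hop, that would go some way toward explaining the gap between the 0.2-0.3 ms network RTT and the 4 ms end-to-end latency.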

Did you make any new measurements with version 2.0, and if so, could you report them?

Thanks in advance :)

@rvassantlal
Contributor

Hello,

Can you let me know how many clients you experimented with?

The last measurements I have are for f=1 and 1024/0 requests: throughput of around 16.5k req/s, which is consistent with the paper.

I will run some tests for 0/0 and get back to you.

@Christoph-Denzel
Author

Thanks,

These are my results for the 0/0 and 4 kB/4 kB benchmarks.

[Image: benchmark results for 0/0 and 4 kB/4 kB]

I couldn't get more clients to work: the system detected replayed leader-change messages or threw other errors, and got stuck repeating this message:
-- Last regency: 0, next regency: 1
-- Re-transmitting STOP message to install regency 1

I already increased `system.totalordermulticast.timeout` to 60000.
Fair batching is enabled, in case that makes a difference.
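For reference, this is the only line I changed in `config/system.config` (property name as above; everything else is untouched):

```properties
# config/system.config (fragment) -- consensus/leader-change timeout in ms
system.totalordermulticast.timeout = 60000
```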
