[#139] events with bitset #167
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #167 +/- ##
==========================================
+ Coverage 78.00% 78.22% +0.21%
==========================================
Files 179 181 +2
Lines 19363 19749 +386
==========================================
+ Hits 15105 15448 +343
- Misses 4258 4301 +43
First round done. The seven new files will be done tomorrow.
…to massively increase performance
…e smaller buffer on aarch64
Mostly nitpicking 😅
@@ -489,7 +496,7 @@ pub mod details {
         ListenerCreateError,
     > {
         let msg = "Failed to create Listener";
-        let id_tracker_capacity = self.trigger_id_max.as_u64() as usize;
+        let id_tracker_capacity = self.trigger_id_max.as_u64() as usize + 1;
Why the +1 here and the -1 in trigger_id_max?
I realized that I had assumed trigger_id_max is the greatest trigger id I can trigger, but actually it was the smallest trigger id I can no longer trigger. So it was counterintuitive and I used it wrong right from the beginning.
To support all ids up to and including trigger_id_max, the bitset requires a capacity of trigger_id_max + 1; the trigger id max, conversely, is then capacity - 1.
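A minimal sketch of that off-by-one relationship (hypothetical helper names, not the crate's API, assuming ids start at 0):

// Hypothetical helpers illustrating the relationship described above;
// `trigger_id_max` is the greatest id that can still be triggered.
fn id_tracker_capacity(trigger_id_max: u64) -> usize {
    // ids 0..=trigger_id_max are usable, so one extra slot is needed
    trigger_id_max as usize + 1
}

fn max_trigger_id(capacity: usize) -> u64 {
    // the inverse: the greatest usable id is capacity - 1
    (capacity - 1) as u64
}

fn main() {
    // a trigger_id_max of 7 means ids 0..=7 are usable: 8 bits in total
    assert_eq!(id_tracker_capacity(7), 8);
    assert_eq!(max_trigger_id(8), 7);
}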
So it acts kind of like an INVALID_TRIGGER_ID?
let now = Instant::now();
let result = sut_listener.timed_wait(TIMEOUT).unwrap();

if result.is_some() {
    assert_that!(result, eq Some(trigger_id));
} else {
    assert_that!(now.elapsed(), time_at_least TIMEOUT);
}
I fail to see why there is an if condition. Shouldn't this be deterministic and always have either a result with a trigger id or a timeout?
Oh no no, my good friend. Here is where the behavior of the concept diverges a bit (maybe it is a grave mistake).
In a bitset, you can trigger every id only precisely once. When you trigger it twice and then wait, you get only one result, and the next wait blocks.
But when the underlying construct is a queue, like in the process-local implementation, you can pile up the same trigger multiple times and get triggered over and over with the same id.
This could be avoided by somehow having a queue where every element can be stored only exactly once.
At the moment, it looks like we have to support both: the bitset (limited id range but no DoS) and the queue (unlimited id range but DoS possibility). The sketch below illustrates the difference.
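A self-contained sketch of the two semantics (illustrative types, not the actual implementation): the bitset coalesces repeated triggers of the same id, while the queue delivers each one.

use std::collections::VecDeque;

// Bitset-backed tracker: a second trigger of the same id is absorbed.
struct BitsetTracker {
    pending: Vec<bool>,
}

impl BitsetTracker {
    fn trigger(&mut self, id: usize) {
        self.pending[id] = true; // idempotent
    }
    fn wait_one(&mut self) -> Option<usize> {
        let id = self.pending.iter().position(|p| *p)?;
        self.pending[id] = false;
        Some(id)
    }
}

// Queue-backed tracker: the same id can pile up (the DoS possibility).
struct QueueTracker {
    pending: VecDeque<usize>,
}

impl QueueTracker {
    fn trigger(&mut self, id: usize) {
        self.pending.push_back(id);
    }
    fn wait_one(&mut self) -> Option<usize> {
        self.pending.pop_front()
    }
}

fn main() {
    let mut bitset = BitsetTracker { pending: vec![false; 8] };
    bitset.trigger(3);
    bitset.trigger(3);
    assert_eq!(bitset.wait_one(), Some(3));
    assert_eq!(bitset.wait_one(), None); // second trigger was coalesced

    let mut queue = QueueTracker { pending: VecDeque::new() };
    queue.trigger(3);
    queue.trigger(3);
    assert_eq!(queue.wait_one(), Some(3));
    assert_eq!(queue.wait_one(), Some(3)); // same id delivered twice
}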
Shouldn't that be handled a layer further down so the concept behaves the same? In case a queue is used, the timed_wait could drain the queue and store the ids locally. Nothing for this PR, but maybe a follow-up for v0.4.
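A rough sketch of that follow-up idea (hypothetical function, assuming a plain VecDeque underneath): drain the queue once per wake-up and deduplicate locally, so the queue-backed listener behaves like the bitset variant.

use std::collections::{BTreeSet, VecDeque};

// Hypothetical: drain everything that piled up and collapse duplicates.
fn drain_deduplicated(queue: &mut VecDeque<u64>) -> BTreeSet<u64> {
    queue.drain(..).collect()
}

fn main() {
    let mut queue: VecDeque<u64> = VecDeque::from([3, 3, 5]);
    let ids = drain_deduplicated(&mut queue);
    assert_eq!(ids.len(), 2); // the duplicate trigger of id 3 is collapsed
}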
std::thread::sleep(std::time::Duration::from_millis(100));
let start = Instant::now();
barrier.wait();
- std::thread::sleep(std::time::Duration::from_millis(100));
- let start = Instant::now();
- barrier.wait();
+ barrier.wait();
+ let start = Instant::now();
Why not just this?
It is possible that the OS does not schedule you again right after the wait call; then you start measuring too late and the numbers come out in your favor. So I wanted to be a bit more pessimistic here.
I do the same thing in the publish-subscribe benchmark.
In this case two barriers can be used to get rid of the sleep and make it more deterministic, as sketched below ... just a suggestion, no action required.
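A minimal sketch of the two-barrier pattern (standalone example, not the benchmark's actual code): the first wait replaces the sleep as a rendezvous, the second opens the measurement window.

use std::sync::{Arc, Barrier};
use std::time::Instant;

fn main() {
    let barrier = Arc::new(Barrier::new(2));
    let b = Arc::clone(&barrier);
    let worker = std::thread::spawn(move || {
        b.wait(); // rendezvous: both threads are definitely running
        b.wait(); // start signal: measurement window opens
        // ... benchmark body ...
    });

    barrier.wait(); // rendezvous, replaces the 100 ms sleep
    barrier.wait(); // start signal
    let start = Instant::now();
    // ... drive the benchmark ...
    let _elapsed = start.elapsed();
    worker.join().unwrap();
}

Even then, the scheduling concern raised above remains: a thread can be descheduled right after the second wait, which is why the more pessimistic measurement start was chosen.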
…atomic wait seems to be incompatible with shared memory and multiple memory mappings; maybe multiple addresses for the same atomic confuse someone
Notes for Reviewer
Pre-Review Checklist for the PR Author
Checklist for the PR Reviewer
Post-review Checklist for the PR Author
References
Relates to #139