
chore(lib): validate inputs before locking #2148


Merged
1 commit merged into main on Oct 14, 2024
Conversation

@chmllr (Contributor) commented on Oct 11, 2024

PR #2140 introduced locking around a cache. Some parts of the critical section seem unrelated to the locking and can be done beforehand without changing the read semantics (this check previously happened after the cache read anyway).
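
A minimal Go sketch of that reordering; the names (`cache`, `validateChainID`, `Get`) are illustrative and not taken from the actual code:

```go
package addrcache

import (
	"fmt"
	"sync"
)

// cache is a hypothetical stand-in for the cache guarded in PR #2140.
type cache struct {
	mu    sync.Mutex
	addrs map[uint64]string
}

// validateChainID stands in for the input check that can run before locking.
func validateChainID(chainID uint64) error {
	if chainID == 0 {
		return fmt.Errorf("invalid chain id %d", chainID)
	}
	return nil
}

// Get validates its input before taking the lock, so the critical section
// only covers the cache read itself; the read semantics are unchanged because
// this check previously ran after the cache read anyway.
func (c *cache) Get(chainID uint64) (string, error) {
	if err := validateChainID(chainID); err != nil {
		return "", err
	}

	c.mu.Lock()
	defer c.mu.Unlock()

	addr, ok := c.addrs[chainID]
	if !ok {
		return "", fmt.Errorf("no address cached for chain %d", chainID)
	}
	return addr, nil
}
```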

issue: none

@chmllr (Contributor, Author) commented on Oct 11, 2024

@kevinhalliday I have another question about the original PR: how critical are lock contentions on GetAddresses? The original PR uses a simple mutex, while we could also use an RW-lock and separate cache reads from cache updates so that at least concurrent reads (which will be the most frequent accesses) do not block each other.
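
For illustration, a hedged sketch of what that RW-lock variant could look like; `rwCache` and its methods are hypothetical, not the actual implementation:

```go
package addrcache

import "sync"

// rwCache sketches the RW-lock alternative; the cache in the PR uses a plain
// sync.Mutex instead.
type rwCache struct {
	mu    sync.RWMutex
	addrs map[uint64][]string
}

// GetAddresses takes only the read lock, so concurrent readers (the most
// frequent accesses) do not block each other.
func (c *rwCache) GetAddresses(chainID uint64) ([]string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	addrs, ok := c.addrs[chainID]
	return addrs, ok
}

// SetAddresses takes the exclusive lock, blocking readers only while the
// cache is being updated.
func (c *rwCache) SetAddresses(chainID uint64, addrs []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.addrs[chainID] = addrs
}
```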

@corverroos (Collaborator) commented on Oct 12, 2024

@chmllr RWMutex should only be used for performance optimisation; I would not worry about it in this case. Performance is a non-issue. Let's keep things simple.

I would even argue that the time spent on this PR by multiple people isn't worth the effort. The value it adds is insignificant, and it feels like we are making the code harder to read.

A simple method with lock-defer-unlock as the first line is common and standard and doesn't need much thinking. As soon as you move the lock around, readers of the code need to think about why it is done here and not there, etc.

I wouldn't worry about these types of optimisations.
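
For reference, the lock-defer-unlock-as-first-line pattern mentioned above looks roughly like this (the type and method names are hypothetical):

```go
package addrcache

import "sync"

// lookupCache is illustrative only; the point is the lock/defer-unlock pair
// on the first lines, which keeps the whole method inside the critical
// section and leaves readers nothing to reason about regarding lock placement.
type lookupCache struct {
	mu      sync.Mutex
	entries map[string]string
}

func (c *lookupCache) Lookup(key string) (string, bool) {
	c.mu.Lock()         // lock on entry...
	defer c.mu.Unlock() // ...unlock on every return path

	v, ok := c.entries[key]
	return v, ok
}
```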

@chmllr (Contributor, Author) commented on Oct 14, 2024

> @chmllr RWMutex should only be used for performance optimisation; I would not worry about it in this case. Performance is a non-issue. Let's keep things simple.
>
> I would even argue that the time spent on this PR by multiple people isn't worth the effort. The value it adds is insignificant, and it feels like we are making the code harder to read.
>
> A simple method with lock-defer-unlock as the first line is common and standard and doesn't need much thinking. As soon as you move the lock around, readers of the code need to think about why it is done here and not there, etc.
>
> I wouldn't worry about these types of optimisations.

Thanks Corver! I fully agree with optimizing for simplicity first; that's why I asked how critical this contention is. The fact that we have a cache means we do have a lot of reads, so if they all wait for each other now, that might not be what we want and could make the whole situation worse than having no cache in the first place... But here I fully trust your intuition 👍

@chmllr merged commit a722aa0 into main on Oct 14, 2024
19 checks passed
@chmllr deleted the chmllr/optimize-locking branch on October 14, 2024 at 07:41