
BankForks - SharableBank for root #7300


Merged: 4 commits into anza-xyz:master on Aug 5, 2025

Conversation

@apfitzge apfitzge commented Aug 4, 2025

Problem

  • Many services lock BankForks just to get the root bank.
    • Some use the RootBankCache, which reduces contention but doesn't eliminate it when the root changes.

Summary of Changes

Fixes #
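
For context, a minimal sketch of what an ArcSwap-backed SharableBank could look like, based on the discussion in the review comments below. The names, signatures, and the stand-in Bank type are illustrative only, not necessarily the PR's exact API.

use std::sync::Arc;

use arc_swap::ArcSwap;

struct Bank; // hypothetical stand-in for the real Bank

#[derive(Clone)]
pub struct SharableBank(Arc<ArcSwap<Bank>>);

impl SharableBank {
    pub fn new(bank: Arc<Bank>) -> Self {
        Self(Arc::new(ArcSwap::new(bank)))
    }

    /// Lock-free snapshot of the current root bank.
    pub fn load(&self) -> Arc<Bank> {
        self.0.load_full()
    }

    /// Called by the (single) root-setting thread when the root advances.
    pub fn store(&self, bank: Arc<Bank>) {
        self.0.store(bank);
    }
}

In a design like this, readers that previously locked BankForks just to fetch the root bank would instead call load() on a cloned handle, so root updates no longer contend with them.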

@codecov-commenter

Codecov Report

❌ Patch coverage is 95.94595% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.8%. Comparing base (1c9366c) to head (1d5e7c5).
⚠️ Report is 7 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff            @@
##           master    #7300     +/-   ##
=========================================
- Coverage    82.8%    82.8%   -0.1%     
=========================================
  Files         801      800      -1     
  Lines      363392   363365     -27     
=========================================
- Hits       300960   300934     -26     
+ Misses      62432    62431      -1     

@apfitzge apfitzge marked this pull request as ready for review August 4, 2025 19:49

akhi3030 commented Aug 4, 2025

FWIW, I was also looking into addressing a similar issue in anza-xyz/alpenglow#322.

@AshwinSekar AshwinSekar left a comment

LGTM, the root bank is only written by one thread, pretty infrequently (~400 ms), but has many readers. Seems ArcSwap will be plenty performant here.

Would wait to see if Brennan/Akhi have any objections

@akhi3030 akhi3030 left a comment

Seems like a fine strategy.

I am curious if in practice we can notice any performance difference between this strategy and the simpler one of:

#[derive(Clone)]
pub struct SharableBank(Arc<parking_lot::RwLock<Arc<Bank>>>);

impl SharableBank {
    pub fn load(&self) -> Arc<Bank> {
        // Hold the read lock just long enough to clone the inner Arc.
        self.0.read().clone()
    }

    fn store(&self, bank: Arc<Bank>) {
        *self.0.write() = bank;
    }
}

Especially because the data being swapped is inside an Arc, so it is cheap to copy.

We could consider doing some benchmarking but I don't think we need to waste time on such micro optimisations at this point in time.

apfitzge (Author) commented Aug 5, 2025

[quoting akhi3030's review comment above in full]

Practically, I'm not sure we'll see much difference since the lock time would be so small. With an RwLock there's still waiting for the lock to be freed, possibly a syscall? I didn't look at the code of parking_lot.
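
For reference, akhi3030's benchmarking suggestion could be tried with a rough single-threaded micro-benchmark like the sketch below. Payload is a hypothetical stand-in for Bank, and real numbers with many concurrent readers (the case this PR cares about) could look quite different; this is an illustration, not a measurement from the PR.

use std::sync::Arc;
use std::time::Instant;

use arc_swap::ArcSwap;
use parking_lot::RwLock;

struct Payload(u64); // hypothetical stand-in for Bank

fn main() {
    const ITERS: u64 = 10_000_000;

    // ArcSwap path: lock-free load of the current value.
    let swap = ArcSwap::from_pointee(Payload(0));
    let start = Instant::now();
    for _ in 0..ITERS {
        let guard = swap.load();
        std::hint::black_box(guard.0);
    }
    println!("arc_swap load:     {:?}", start.elapsed());

    // RwLock path: take the read lock, then clone the inner Arc.
    let lock = RwLock::new(Arc::new(Payload(0)));
    let start = Instant::now();
    for _ in 0..ITERS {
        let bank = lock.read().clone();
        std::hint::black_box(bank.0);
    }
    println!("rwlock read+clone: {:?}", start.elapsed());
}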

@apfitzge apfitzge merged commit 5d1651c into anza-xyz:master Aug 5, 2025
52 checks passed