Mark primary node as alive immediately if reachable and failover is not possible #1927

Open · wants to merge 4 commits into base: unstable

Conversation


@hpatro hpatro commented Apr 7, 2025

Added a test case for failover explicitly disabled via cluster-replica-no-failover. Currently, we wait for a cluster_node_timeout * 2 period to mark a failed primary as alive even when we are able to communicate with it. For scenarios where a failover cannot be triggered, we can mark the primary as available immediately, for better availability.

Before

[ok]: no failover - verify replica is not promoted if failover has been disabled (6006 ms)
[ok]: no failover - primary is in failed state (123 ms)
[ok]: no failover - cluster is in healthy state (10138 ms)

After

[ok]: no failover - verify replica is not promoted if failover has been disabled (5863 ms)
[ok]: no failover - primary is in failed state (120 ms)
[ok]: no failover - cluster is in healthy state (1 ms)

The last test, no failover - cluster is in healthy state, shows the cluster state reaching ok after 1 ms (with this change) compared to roughly 10 s on unstable, where the wait is 2 * cluster-node-timeout with cluster-node-timeout set to 5 s.

@hpatro hpatro added the cluster label Apr 7, 2025

codecov bot commented Apr 7, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 71.06%. Comparing base (204097d) to head (a26e140).
Report is 6 commits behind head on unstable.

Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #1927      +/-   ##
============================================
- Coverage     71.07%   71.06%   -0.02%     
============================================
  Files           123      123              
  Lines         65683    65778      +95     
============================================
+ Hits          46687    46743      +56     
- Misses        18996    19035      +39     
Files with missing lines Coverage Δ
src/cluster_legacy.c 86.41% <100.00%> (+0.32%) ⬆️

... and 17 files with indirect coverage changes


@@ -2146,12 +2146,21 @@ void clearNodeFailureIfNeeded(clusterNode *node) {
clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE | CLUSTER_TODO_SAVE_CONFIG);
}

/* If none of the replicas of a given primary can fail over, then immediately mark it as alive. */
int cant_failover = 1;
Member

Is this a common enough failure mode to have special handling for it? Most people only temporarily disable failover when doing upgrades and they probably shouldn't disable it on all replicas. It's also possible there is another replica we are unaware of that is eligible.

Collaborator Author

I looked at the request behind this feature. It seems users like to place replicas closer to the application for faster reads but don't want them to be part of the cluster failover process. The other use case mentioned is a replica used only for backup purposes. A few others don't want automated failover to kick in and only perform failover(s) manually. All of this is documented here: redis/redis#3021

Also, I believe with this change we improve availability for clusters with a primary-only setup. We are accelerating availability where it seems feasible, and in the worst case I believe it was always possible to enter a partitioned state and have multiple primaries in a shard.

Member

I agree with this change. Ultimately, the truth lies with the primary in question. Whether or not a bystander node like this one marks the primary healthy immediately doesn't really change the fact that the primary will regain its primaryship after a flash restart. So the more important thing is the old primary staying down for 2 times cluster-node-timeout; it's less about the observer marking it alive immediately. This approach amplifies an existing problem a bit to trade consistency for availability, which I think is a reasonable decision, provided we are reasonably sure that there won't be replicas competing for the primaryship.

Member
@enjoy-binbin enjoy-binbin Apr 9, 2025

We have a zoneid in the internal fork (and we gossip it); usually replicas in the same az as the primary node will have a better ranking. Of course, replication offset is the top priority. When the offsets are the same, replicas in the same az rank better and can initiate elections faster. We almost never use the no-failover configuration option.

Collaborator Author
@hpatro hpatro Apr 9, 2025

@madolson and I were talking about node placement with zonal awareness; we can also improve the placement of voting members and avoid placing all of them in the same zone. Let me file an issue on this.

@zuiderkwast zuiderkwast requested a review from PingXie April 8, 2025 10:50

Signed-off-by: Harkrishn Patro <[email protected]>
@hpatro hpatro changed the title Mark primary node as alive immediately if reachable and failover is disabled Mark primary node as alive immediately if reachable and failover is not possible Apr 8, 2025
Signed-off-by: Harkrishn Patro <[email protected]>
Member
@enjoy-binbin enjoy-binbin left a comment

Makes sense to me.

break;
}
}

/* If it is a primary and...
* 1) The FAIL state is old enough.
Member

let's also update this line to mention the dont_wait.

@@ -4827,14 +4841,19 @@ void clusterHandleReplicaFailover(void) {
* 3) We don't have the no failover configuration set, and this is
Member

this line needs an update.

4 participants