Mark primary node as alive immediately if reachable and failover is not possible #1927
base: unstable
Conversation
Signed-off-by: Harkrishn Patro <[email protected]>
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files

@@             Coverage Diff             @@
##           unstable    #1927      +/-   ##
============================================
- Coverage     71.07%   71.06%   -0.02%
============================================
  Files           123      123
  Lines         65683    65778      +95
============================================
+ Hits          46687    46743      +56
- Misses        18996    19035      +39
src/cluster_legacy.c
Outdated
@@ -2146,12 +2146,21 @@ void clearNodeFailureIfNeeded(clusterNode *node) {
        clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE | CLUSTER_TODO_SAVE_CONFIG);
    }

    /* If none of the replicas of a given primary can fail over, then immediately mark it as alive. */
    int cant_failover = 1;
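For context, a minimal sketch of how this check could be completed, assuming the clusterNode fields num_replicas/replicas and the CLUSTER_NODE_NOFAILOVER flag from cluster_legacy.h; this is illustrative only, not the exact code of this PR:

    int cant_failover = 1;
    for (int i = 0; i < node->num_replicas; i++) {
        /* A replica that is allowed to fail over may still start an election,
         * so in that case we must keep waiting the usual FAIL-retention period. */
        if (!(node->replicas[i]->flags & CLUSTER_NODE_NOFAILOVER)) {
            cant_failover = 0;
            break;
        }
    }
    if (cant_failover) {
        /* No replica can take over, so clear the FAIL flag right away. */
        node->flags &= ~CLUSTER_NODE_FAIL;
        clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE | CLUSTER_TODO_SAVE_CONFIG);
    }

Note that a primary with no replicas at all also leaves cant_failover set, which matches the primary-only case discussed below.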
Is this a common enough failure mode to have special handling for it? Most people only temporarily disable failover when doing upgrades and they probably shouldn't disable it on all replicas. It's also possible there is another replica we are unaware of that is eligible.
I looked at the request behind this feature. It seems users like to place replicas closer to the application for faster reads but don't want them to be part of the cluster failover process. Another use case mentioned is a replica used only for backups. A few others don't want automated failover to kick in at all and only perform failover(s) manually. All of this is documented here: redis/redis#3021
Also, I believe this change improves availability for clusters with a primary-only setup. We are accelerating recovery where it seems feasible, and in the worst case I believe it was always possible to enter a partitioned state and have multiple primaries in a shard.
I agree with this change. Ultimately, the truth lies with the primary in question. Whether or not a bystander node like this one marks the primary healthy immediately doesn't really change the fact that the primary will regain its primaryship after a flash restart. So the more important thing is the old primary staying down for 2 times cluster-node-timeout; it's less about the observer marking it alive immediately. This approach amplifies an existing problem a bit to trade consistency for availability, which I think is a reasonable decision, provided we are reasonably sure that there won't be replicas competing for the primaryship.
We have a zone ID in our internal fork (and we gossip it); usually replicas in the same AZ as the primary node will have a better ranking. Of course, replication offset is the top priority. When the offsets are the same, replicas in the same AZ rank higher and can initiate elections faster. We almost never use the no-failover configuration option.
@madolson and I were talking about node placement with zonal awareness; we can also improve the placement of the voting members and avoid putting all of them in the same zone. Let me file an issue on this.
Signed-off-by: Harkrishn Patro <[email protected]>
Signed-off-by: Harkrishn Patro <[email protected]>
Makes sense to me.
            break;
        }
    }

    /* If it is a primary and...
     * 1) The FAIL state is old enough.
let's also update this line to mention the dont_wait.
@@ -4827,14 +4841,19 @@ void clusterHandleReplicaFailover(void) {
     * 3) We don't have the no failover configuration set, and this is
This line needs an update.
Signed-off-by: Binbin <[email protected]>
Mark primary node as alive immediately if reachable and failover is not possible
Added a test case for failover explicitly disabled via cluster-replica-no-failover. Currently, we wait for a cluster_node_timeout * 2 period to mark a failed primary as alive if we are able to communicate with it. For scenarios where a failover won't get triggered, we can mark it as alive immediately for better availability.

Before

After

The last test, no failover - cluster is in healthy state, shows that the cluster reaches a healthy state after 1 ms (with this change) compared to 10 ms (unstable), where the cluster node timeout is set to 5 ms.
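For reference, a minimal sketch of the kind of node configuration such a test would exercise (assumed for illustration, not the literal test setup); a replica started with these settings will never initiate an automatic failover for its primary:

    # Assumed illustrative settings, not the literal test configuration.
    cluster-enabled yes
    # Very low timeout (milliseconds) so FAIL is declared quickly, as in the test.
    cluster-node-timeout 5
    # This replica will never start an automatic failover for its primary.
    cluster-replica-no-failover yes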