Mark primary node as alive immediately if reachable and failover is not possible #1927
Status: Open

hpatro wants to merge 4 commits into valkey-io:unstable from hpatro:clear_fail_flag_nofailover
+84 −4
Changes from 1 commit
```diff
@@ -2146,12 +2146,21 @@ void clearNodeFailureIfNeeded(clusterNode *node) {
         clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE | CLUSTER_TODO_SAVE_CONFIG);
     }
 
+    /* If none of the replicas of a given primary can fail over, then
+     * immediately mark it as alive. */
+    int cant_failover = 1;
+    for (int j = 0; j < node->num_replicas; j++) {
+        if (!clusterNodeIsNoFailover(node->replicas[j])) {
+            cant_failover = 0;
+            break;
+        }
+    }
+
     /* If it is a primary and...
      * 1) The FAIL state is old enough.
      * 2) It is yet serving slots from our point of view (not failed over).
      * Apparently no one is going to fix these slots, clear the FAIL flag. */
     if (clusterNodeIsVotingPrimary(node) &&
-        (now - node->fail_time) > (server.cluster_node_timeout * CLUSTER_FAIL_UNDO_TIME_MULT)) {
+        ((now - node->fail_time) > (server.cluster_node_timeout * CLUSTER_FAIL_UNDO_TIME_MULT) || cant_failover)) {
         serverLog(
             LL_NOTICE,
             "Clear FAIL state for node %.40s (%s): is reachable again and nobody is serving its slots after some time.",
```

Review comment on the "1) The FAIL state is old enough." line: let's also update this line to mention the dont_wait.
```diff
@@ -4735,6 +4744,10 @@ void clusterLogCantFailover(int reason) {
     case CLUSTER_CANT_FAILOVER_WAITING_DELAY: msg = "Waiting the delay before I can start a new failover."; break;
     case CLUSTER_CANT_FAILOVER_EXPIRED: msg = "Failover attempt expired."; break;
     case CLUSTER_CANT_FAILOVER_WAITING_VOTES: msg = "Waiting for votes, but majority still not reached."; break;
+    case CLUSTER_CANT_FAILOVER_DISABLED:
+        msg = "Failover has been disabled. "
+              "Please check the 'cluster-replica-no-failover' configuration option";
+        break;
     default: serverPanic("Unknown cant failover reason code.");
     }
     lastlog_time = time(NULL);
```
```diff
@@ -4827,14 +4840,19 @@ void clusterHandleReplicaFailover(void) {
      * 3) We don't have the no failover configuration set, and this is
      * not a manual failover. */
     if (clusterNodeIsPrimary(myself) || myself->replicaof == NULL ||
-        (!nodeFailed(myself->replicaof) && !manual_failover) ||
-        (server.cluster_replica_no_failover && !manual_failover)) {
+        (!nodeFailed(myself->replicaof) && !manual_failover)) {
         /* There are no reasons to failover, so we set the reason why we
          * are returning without failing over to NONE. */
         server.cluster->cant_failover_reason = CLUSTER_CANT_FAILOVER_NONE;
         return;
     }
 
+    if (server.cluster_replica_no_failover && !manual_failover) {
+        server.cluster->cant_failover_reason = CLUSTER_CANT_FAILOVER_DISABLED;
+        clusterLogCantFailover(CLUSTER_CANT_FAILOVER_DISABLED);
+        return;
+    }
+
     /* Set data_age to the number of milliseconds we are disconnected from
      * the primary. */
     if (server.repl_state == REPL_STATE_CONNECTED) {
```

Review comment on the "3) We don't have the no failover configuration set" line: this line needs an update.
```diff
@@ -6602,7 +6620,7 @@ int clusterNodeIsFailing(clusterNode *node) {
 }
 
 int clusterNodeIsNoFailover(clusterNode *node) {
-    return node->flags & CLUSTER_NODE_NOFAILOVER;
+    return nodeCantFailover(node);
 }
 
 const char **clusterDebugCommandExtendedHelp(void) {
```
Review discussion:

- Is this a common enough failure mode to have special handling for it? Most people only temporarily disable failover when doing upgrades, and they probably shouldn't disable it on all replicas. It's also possible there is another replica we are unaware of that is eligible.

- I looked at the request behind this feature. It seems users like to place replicas closer to the application for faster reads but don't want them to be part of the cluster failover process. The other use case mentioned is a replica used only for backup purposes. A few others don't want automated failover to kick in at all and only perform failovers manually. All of this is documented here: redis/redis#3021. Also, I believe this change improves availability for clusters with a primary-only setup. We are accelerating availability where it seems feasible, and in the worst case I believe it was always possible to enter a partitioned state and have multiple primaries in a shard.

- I agree with this change. Ultimately, the truth lies with the primary in question. Whether or not a bystander node like this one marks the primary healthy immediately doesn't really change the fact that the primary will regain its primaryship after a flash restart. So the more important thing is the old primary staying down for two times cluster-node-timeout; it's less about the observer marking it alive immediately. This approach amplifies an existing problem a bit to trade consistency for availability, which I think is a reasonable decision, provided we are reasonably sure that there won't be replicas competing for the primaryship.

- We have a zoneid in the internal fork (and we gossip it); usually replicas in the same AZ as the primary node will have a better ranking. Of course, replication offset is the top priority. When the offsets are the same, the replicas in the same AZ have a better rank and can initiate elections faster. We almost never use the no-failover configuration option.

- @madolson and I were talking about node placement with zonal awareness, and we can also improve the placement of voting members and avoid putting all of them in the same zone. Let me file an issue on this.