Search only replicas (scale to zero) with Reader/Writer Separation #17299

Merged
merged 30 commits on Apr 7, 2025

Conversation

prudhvigodithi
Member

@prudhvigodithi prudhvigodithi commented Feb 7, 2025

Description

  • The primary goal is to allow users to designate an index as search-only, so that only its search-only replicas keep running once the mode is enabled via an API call _searchonly/enable (and disabled via _searchonly/disable); see the sketch after this list.

  • When _searchonly is enabled for an index, the process uses a two-phase scale-down: a temporary block is applied for the duration of the scale-down operation and is then explicitly replaced with a permanent block once all prerequisites (e.g., shard sync, flush, metadata updates) have been met.
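
For reference, a minimal sketch of the enable/disable calls described above (the index-scoped path and the POST method are assumptions based on this description, not verbatim from the implementation):

# Enable search-only mode for an index (scale down to search-only replicas)
curl -X POST "http://localhost:9200/my-index/_searchonly/enable"

# Disable search-only mode (scale the index back up)
curl -X POST "http://localhost:9200/my-index/_searchonly/disable"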

From #17299 (comment): using the _scale API with search_only set to true or false.

curl -X POST "http://localhost:9200/my-index/_scale" \
-H "Content-Type: application/json" \
-d '{
  "search_only": true
}' 

curl -X POST "http://localhost:9200/my-index/_scale" \
-H "Content-Type: application/json" \
-d '{
  "search_only": false
}'

Related Issues

#16720 and part of #15306

Check List

  • Functionality includes testing.
  • API changes companion pull request created, if applicable.
  • Public documentation issue/PR created, if applicable.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

Contributor

github-actions bot commented Feb 7, 2025

❌ Gradle check result for e89b812: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@prudhvigodithi
Member Author

While I refactor the code and add additional tests, I’m creating this PR to gather early feedback; please take a look and add your thoughts. I will share the testing results in the comments. Thanks!
@mch2 @shwetathareja @msfroh @getsaurabh02

Contributor

github-actions bot commented Feb 7, 2025

❌ Gradle check result for 1bd7c6a: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@prudhvigodithi
Member Author

I went through and tested the following scenarios:

Scenario 1: Search-Only Replica Recovery with a Persistent Data Directory and cluster.remote_store.state.enabled set to false

OpenSearch was started with the following settings:

./gradlew clean run -PnumNodes=6 --data-dir=/tmp/foo

OpenSearch settings:

setting 'path.repo', '["/tmp/my-repo"]'
setting 'opensearch.experimental.feature.read.write.split.enabled', 'true'
setting 'node.attr.remote_store.segment.repository', 'my-repository'
setting 'node.attr.remote_store.translog.repository', 'my-repository'
setting 'node.attr.remote_store.repository.my-repository.type', 'fs'
setting 'node.attr.remote_store.state.repository', 'my-repository'
setting 'node.attr.remote_store.repository.my-repository.settings.location', '/tmp/my-repo'

Shard Allocation Before Recovery

curl -X GET "localhost:9200/_cat/shards/my-index?v&h=index,shard,prirep,state,unassigned.reason,node,searchOnly"

index    shard prirep state   unassigned.reason node
my-index 0     p      STARTED                   runTask-0
my-index 0     s      STARTED                   runTask-4
my-index 0     r      STARTED                   runTask-2
my-index 1     p      STARTED                   runTask-3
my-index 1     r      STARTED                   runTask-1
my-index 1     s      STARTED                   runTask-5

On restart (terminating the process), everything comes back up as running. With search-only mode enabled (/_searchonly/enable), only the search replicas come back up after restart, which works as expected.

curl -X GET "localhost:9200/_cat/shards/my-index?v&h=index,shard,prirep,state,unassigned.reason,node,searchOnly"
index    shard prirep state   unassigned.reason node
my-index 0     s      STARTED                   runTask-2
my-index 1     s      STARTED                   runTask-1

Scenario 2: No Data Directory Preservation and cluster.remote_store.state.enabled set to false – Index Lost After Process Restart (Recovery)

In this scenario, OpenSearch is started without preserving the data directory, meaning that all local shard data is lost upon recovery.

./gradlew clean run -PnumNodes=6
OpenSearch settings:

setting 'path.repo', '["/tmp/my-repo"]'
setting 'opensearch.experimental.feature.read.write.split.enabled', 'true'
setting 'node.attr.remote_store.segment.repository', 'my-repository'
setting 'node.attr.remote_store.translog.repository', 'my-repository'
setting 'node.attr.remote_store.repository.my-repository.type', 'fs'
setting 'node.attr.remote_store.state.repository', 'my-repository'
setting 'node.attr.remote_store.repository.my-repository.settings.location', '/tmp/my-repo'

Behavior After Recovery:

  • Upon terminating the process and restarting OpenSearch, the index is completely lost.
  • Any attempt to retrieve the shard state results in an index_not_found_exception:
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [my-index]","index":"my-index","resource.id":"my-index","resource.type":"index_or_alias","index_uuid":"_na_"}],"type":"index_not_found_exception","reason":"no such index [my-index]","index":"my-index","resource.id":"my-index","resource.type":"index_or_alias","index_uuid":"_na_"},"status":404}
  • Even with a remote restore (_remotestore/_restore?restore_all_shards=true, sketched after this list), the index remains unavailable.
  • Even after recreating the index manually and attempting a restore, documents do not get picked up.
  • Since --data-dir was not used during testing, local data (including cluster metadata) is wiped on recovery.
  • Because the cluster state is lost, OpenSearch no longer has any reference to the index.
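
A minimal sketch of the restore attempt mentioned in the list above, using the standard remote store restore API (index name taken from this test):

curl -X POST "http://localhost:9200/_remotestore/_restore?restore_all_shards=true" -H 'Content-Type: application/json' -d'
{
  "indices": ["my-index"]
}
'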

Scenario 3: Cluster Remote Store State Enabled (cluster.remote_store.state.enabled set to true, no persistent data directory) – Primary Shards Remain Unassigned After Recovery

./gradlew clean run -PnumNodes=6
OpenSearch settings:

setting 'path.repo', '["/tmp/my-repo"]'
setting 'opensearch.experimental.feature.read.write.split.enabled', 'true'
setting 'node.attr.remote_store.segment.repository', 'my-repository'
setting 'node.attr.remote_store.translog.repository', 'my-repository'
setting 'node.attr.remote_store.repository.my-repository.type', 'fs'
setting 'node.attr.remote_store.state.repository', 'my-repository'
setting 'node.attr.remote_store.repository.my-repository.settings.location', '/tmp/my-repo'
setting 'cluster.remote_store.state.enabled', 'true'

Shard Allocation After Recovery

curl -X GET "localhost:9200/_cat/shards/my-index?v&h=index,shard,prirep,state,unassigned.reason,node,searchOnly"
index    shard prirep state      unassigned.reason node
my-index 0     p      UNASSIGNED CLUSTER_RECOVERED 
my-index 0     s      UNASSIGNED CLUSTER_RECOVERED 
my-index 0     r      UNASSIGNED CLUSTER_RECOVERED 
my-index 1     p      UNASSIGNED CLUSTER_RECOVERED 
my-index 1     r      UNASSIGNED CLUSTER_RECOVERED 
my-index 1     s      UNASSIGNED CLUSTER_RECOVERED 

Issue: Primary Shards Remain Unassigned
Even though cluster.remote_store.state.enabled is true, the primary shards are not automatically assigned after restart (replicating the recovery). The allocation explanation states:

"allocate_explanation": "cannot allocate because a previous copy of the primary shard existed but can no longer be found on the nodes in the cluster"
  • The remote store only contains segments and translogs, NOT active shard copies.
  • Since --data-dir was not used, the local copies of the primary shards are lost.
  • OpenSearch does not automatically restore primaries from the remote store without explicit intervention:
curl -X POST "http://localhost:9200/_remotestore/_restore" -H 'Content-Type: application/json' -d'  
{                
  "indices": ["my-index"]
}
'
  • However, with this PR, when _searchonly is enabled the search-only replicas recover without a primary. Since cluster.remote_store.state.enabled is true, OpenSearch remembers that the index exists after restart, and the allocation logic skips the check for an active primary for search-only replicas. This allows search-only replicas to be assigned to a node even without an existing primary. Without _searchonly, the behavior is the same for all replicas; the intent is to give an advantage to users with _searchonly-enabled indices, who should not have to care about _remotestore/_restore since no primaries are involved.

    • Search-only replicas can recover automatically from the remote store.
    • Search queries remain functional (see the example after this list).
    • The cluster state correctly remembers the index, but does not bring up primaries since _searchonly is enabled.
  • The default behavior is that OpenSearch does not assume lost primaries should be re-created from remote storage. It waits for explicit user intervention to restore primary shards (_remotestore/_restore). Is this by design?
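
To illustrate the point that searches keep working against the search-only replicas, a plain query against the index (nothing here is specific to this PR):

curl -X GET "localhost:9200/my-index/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} }
}
'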

Scenario 4: Persistent Data Directory with Remote Store State – Seamless Recovery of Primaries, Replicas, and Search-Only Replicas

./gradlew clean run -PnumNodes=6 --data-dir=/tmp/foo
OpenSearch settings:

setting 'path.repo', '["/tmp/my-repo"]'
setting 'opensearch.experimental.feature.read.write.split.enabled', 'true'
setting 'node.attr.remote_store.segment.repository', 'my-repository'
setting 'node.attr.remote_store.translog.repository', 'my-repository'
setting 'node.attr.remote_store.repository.my-repository.type', 'fs'
setting 'node.attr.remote_store.state.repository', 'my-repository'
setting 'node.attr.remote_store.repository.my-repository.settings.location', '/tmp/my-repo'
setting 'cluster.remote_store.state.enabled', 'true'

Upon recovery (no intervention is required):

curl -X GET "localhost:9200/_cat/shards/my-index?v&h=index,shard,prirep,state,unassigned.reason,node,searchOnly"
index    shard prirep state   unassigned.reason node
my-index 0     p      STARTED                   runTask-0
my-index 0     r      STARTED                   runTask-2
my-index 0     s      STARTED                   runTask-4
my-index 1     p      STARTED                   runTask-5
my-index 1     r      STARTED                   runTask-3
my-index 1     s      STARTED                   runTask-1
  • All primary and replica shards successfully recover since the cluster metadata is retained in the persistent data directory.

If search-only mode is enabled for the index, OpenSearch correctly brings up only the search replicas while removing the primary and regular replicas.

curl -X GET "localhost:9200/_cat/shards/my-index?v&h=index,shard,prirep,state,unassigned.reason,node,searchOnly"
index    shard prirep state   unassigned.reason node
my-index 0     s      STARTED                   runTask-3
my-index 1     s      STARTED                   runTask-3
  • Only search replicas (SORs) are restored, as expected.

@prudhvigodithi
Member Author

Coming from #17299 (comment), @shwetathareja can you please go over scenarios 2 and 3 and see if they make sense? I wanted to understand why _remotestore/_restore is required in these scenarios, and I wanted to give users the advantage of removing this intervention for search-only indices.
Thanks
@mch2

@prudhvigodithi prudhvigodithi force-pushed the searchonly-2 branch 3 times, most recently from 8f1d4ea to 7fa5133 on February 7, 2025 23:32
Contributor

github-actions bot commented Feb 7, 2025

❌ Gradle check result for 7fa5133: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@prudhvigodithi
Member Author

I have updated the PR to adjust the cluster health configuration so it uses only search replicas, and to incorporate the changes applied when _searchonly is enabled; the change is not too big, hence going with the same PR.
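
For reference, the per-index health that this adjustment affects can be checked with the standard cluster health API (shown only as an illustrative check, not new syntax from this PR):

curl -X GET "localhost:9200/_cluster/health/my-index?pretty"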

Contributor

github-actions bot commented Feb 8, 2025

❌ Gradle check result for 64bb954: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@prudhvigodithi prudhvigodithi self-assigned this Feb 10, 2025
@github-actions github-actions bot added the enhancement, Roadmap:Search, Search:Performance, and v3.0.0 labels Feb 12, 2025
Contributor

❌ Gradle check result for 470c0ea: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@prudhvigodithi
Member Author

Adding @sachinpkale: can you please take a look at this comment #17299 (comment) and share your thoughts on why _remotestore/_restore is required (Scenario 3 from #17299 (comment)) and why the cluster cannot be auto-recovered? Is there any strong reason for this manual intervention to run the API?

curl -X GET "localhost:9200/_cat/shards/my-index?v&h=index,shard,prirep,state,unassigned.reason,node,searchOnly"
index    shard prirep state      unassigned.reason node
my-index 0     p      UNASSIGNED CLUSTER_RECOVERED 
my-index 0     s      UNASSIGNED CLUSTER_RECOVERED 
my-index 0     r      UNASSIGNED CLUSTER_RECOVERED 
my-index 1     p      UNASSIGNED CLUSTER_RECOVERED 
my-index 1     s      UNASSIGNED CLUSTER_RECOVERED 
my-index 1     r      UNASSIGNED CLUSTER_RECOVERED 

I didn't get much info from the docs: https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/remote-store/index/#restoring-from-a-backup.

Contributor

github-actions bot commented Apr 1, 2025

❌ Gradle check result for d7dbfa1: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Contributor

github-actions bot commented Apr 1, 2025

❌ Gradle check result for 9628f0f: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Signed-off-by: Prudhvi Godithi <[email protected]>
Signed-off-by: Prudhvi Godithi <[email protected]>
Contributor

github-actions bot commented Apr 1, 2025

❌ Gradle check result for d891a8e: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Contributor

github-actions bot commented Apr 1, 2025

❕ Gradle check result for d891a8e: UNSTABLE

Please review all flaky tests that succeeded after retry and create an issue if one does not already exist to track the flaky failure.

Contributor

github-actions bot commented Apr 2, 2025

❌ Gradle check result for 7064efc: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Contributor

github-actions bot commented Apr 2, 2025

✅ Gradle check result for 4f3b2a5: SUCCESS

@mch2
Member

mch2 commented Apr 2, 2025

I think this is in a good state; I'd like to get this merged so we can bake it and iterate if needed before the 3.0 cutoff.
@shwetathareja @Bukhtawar @msfroh Wondering if any of you would like to make another pass here; I will hold for another day given the size of this.

Contributor

github-actions bot commented Apr 4, 2025

❌ Gradle check result for 6cd6033: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Signed-off-by: Prudhvi Godithi <[email protected]>
Signed-off-by: Prudhvi Godithi <[email protected]>
Contributor

github-actions bot commented Apr 5, 2025

✅ Gradle check result for ec3cde7: SUCCESS

@prudhvigodithi
Member Author

Thanks @mch2, I have resolved the conversations/comments, which should now allow maintainers to merge the PR.
Thanks

Labels
enhancement, Roadmap:Search, Search:Performance, v3.0.0

Successfully merging this pull request may close these issues.

[RW Separation] [Feature Request] Scale to Zero (Indexing Shards) with Reader/Writer Separation.