
[Concurrent Segment Search] Enforce max_bucket setting at shard level reduce #12916

Open
@sohami

Description

Is your feature request related to a problem? Please describe

OpenSearch has a search.max_buckets setting that limits the number of buckets collected for each search request across shards. It is enforced during the final reduce phase on the coordinator. In the shard-level request, some aggregations bound the number of collected buckets through aggregation parameters, such as the size field supported by the composite aggregation; others rely on circuit breakers to limit memory usage. With concurrent segment search, such a size applies per slice, so across slices each shard can still collect more than max_buckets.
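For a rough sense of the gap, here is a back-of-the-envelope sketch. The slice count and size below are made-up numbers; search.max_buckets defaults to 65,535:

```java
// Illustrative only: each slice honors its own "size", but the per-shard
// total is bounded only by slices * size.
public class SliceBucketMath {
    public static void main(String[] args) {
        int slices = 8;             // assumed concurrent search slice count
        int sizePerSlice = 10_000;  // e.g. composite agg "size", applied per slice
        int maxBuckets = 65_535;    // search.max_buckets default

        int worstCasePerShard = slices * sizePerSlice; // 80,000 buckets
        System.out.println(worstCasePerShard > maxBuckets); // true: over the limit
    }
}
```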

Describe the solution you'd like

During the shard-level reduce with concurrent segment search, we don't check the collected bucket count. We can improve this by enforcing the check in the shard-level reduce as well, so that shards never return more buckets than the limit and the coordinator is not overloaded in these cases. A minimal sketch of such a check follows.
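The sketch below is modeled on the counting consumer the coordinator already uses for the final reduce; the class name, the exception message, and the call site are assumptions for illustration, not the actual OpenSearch implementation:

```java
import java.util.function.IntConsumer;

// Sketch of a counting consumer in the spirit of OpenSearch's
// coordinator-side MultiBucketConsumer, but applied during the shard-level
// reduce. All names here are illustrative, not the real OpenSearch API.
public class ShardBucketLimitConsumer implements IntConsumer {
    private final int maxBuckets; // value of the search.max_buckets setting
    private int count;

    public ShardBucketLimitConsumer(int maxBuckets) {
        this.maxBuckets = maxBuckets;
    }

    @Override
    public void accept(int newBuckets) {
        count += newBuckets;
        if (count > maxBuckets) {
            // OpenSearch surfaces this as TooManyBucketsException on the
            // coordinator; a plain RuntimeException stands in for it here.
            throw new RuntimeException("Trying to create too many buckets. "
                + "Must be less than or equal to [" + maxBuckets
                + "] but was [" + count + "].");
        }
    }

    public static void main(String[] args) {
        // Hypothetical shard-level reduce merging three slice results.
        IntConsumer checker = new ShardBucketLimitConsumer(65_535);
        for (int sliceBuckets : new int[] {30_000, 30_000, 30_000}) {
            checker.accept(sliceBuckets); // throws on the third slice
        }
    }
}
```

Failing fast at the shard avoids serializing and shipping an oversized partial result to the coordinator, which is the point of the proposal.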

Related component

Search:Resiliency

Describe alternatives you've considered

Don't enforce the bucket limit at the shard level and let the coordinator handle the validation.

Additional context

No response
