Closed as not planned
Description
Describe the bug
Using the neural_query_enricher processor in a search pipeline allows you to set a default model_id for neural queries. However, when a nested query contains a neural query (e.g. for chunked text embeddings), the neural query still requires an explicit model_id.
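For context, the pipeline in question was created along these lines (sketch based on the linked documentation; the model ID is a placeholder):

```json
PUT /_search/pipeline/default_model_pipeline
{
  "request_processors": [
    {
      "neural_query_enricher": {
        "default_model_id": "<your-model-id>"
      }
    }
  ]
}
```

With this pipeline set as the index's default, a top-level neural query without a model_id works as expected; only the nested case fails.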
Related component
Search
To Reproduce
- Create an index with a nested knn field.
- Add a search pipeline with a default_model_id, as per https://opensearch.org/docs/latest/search-plugins/semantic-search/#setting-a-default-model-on-an-index-or-field.
- Perform a nested query, e.g.:
GET /my-index/_search
{
  "query": {
    "nested": {
      "score_mode": "max",
      "path": "text_embedding",
      "query": {
        "neural": {
          "text_embedding.knn": {
            "query_text": "my query text"
          }
        }
      }
    }
  }
}
- A 500 error is returned:
[HTTP/1.1 500 Internal Server Error]
{"error":{"root_cause":[{"type":"null_pointer_exception","reason":"modelId is marked non-null but is null"}],"type":"null_pointer_exception","reason":"modelId is marked non-null but is null"},"status":500}
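As a workaround, the same nested query succeeds when the model_id is supplied explicitly inside the neural clause, bypassing the enricher (sketch; the model ID is a placeholder):

```json
GET /my-index/_search
{
  "query": {
    "nested": {
      "score_mode": "max",
      "path": "text_embedding",
      "query": {
        "neural": {
          "text_embedding.knn": {
            "query_text": "my query text",
            "model_id": "<your-model-id>"
          }
        }
      }
    }
  }
}
```

This suggests the neural_query_enricher processor is not traversing into the inner query of a nested clause when injecting the default model_id.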
Expected behavior
A 200 response with search results, with the pipeline's default_model_id applied to the neural query inside the nested clause.
Additional Details
Plugins
ml
Host/Environment (please complete the following information):
- Amazon OpenSearch 2.13