
[META] ML Inference Processor Enhancements I  #2839

Closed
@mingshl

Description


Is your feature request related to a problem?

Search response processor:

  • Support lists in string substitution during the Predict API call, so the ML inference search response processor can support GenAI/RAG use cases with the prompt defined at the connector level.
  • Support custom prompts in model config. Currently this causes an escaping problem, due to the model input change described in [BUG] model_config prefix is changed in payload for ML inference processors #2822.
  • Support one-to-one inference in the ML inference search response processor.
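The first two bullets can be illustrated with a minimal sketch of the intended behavior: substituting a list of document strings into a prompt template without breaking JSON escaping. The template placeholder name (`${parameters.context}`) and the `fill_prompt` helper are hypothetical illustrations, not the actual OpenSearch ml_inference implementation.

```python
import json

# Hypothetical prompt template, as it might be defined at the connector level.
PROMPT_TEMPLATE = 'Summarize the following documents: ${parameters.context}'

def fill_prompt(template: str, context: list[str]) -> str:
    """Replace the placeholder with a JSON-encoded list so that quotes
    and newlines inside the documents stay properly escaped."""
    # json.dumps escapes embedded quotes/newlines, which is one way to
    # avoid the escaping problem reported for custom prompts in #2822.
    encoded = json.dumps(context)
    return template.replace('${parameters.context}', encoded)

docs = ['Doc one with "quotes"', 'Doc two\nwith a newline']
prompt = fill_prompt(PROMPT_TEMPLATE, docs)

# The filled prompt can still be embedded safely in a JSON request body:
json.loads('{"prompt": ' + json.dumps(prompt) + '}')
```

This is only a sketch of the requested capability; the real processor would perform the substitution server-side when building the Predict API payload.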

