**Description**
**Please make sure you read the contribution guide and file the issues in the right place.**
**Is your feature request related to a problem? Please describe.**
When using vector search for grounding in the Google ADK's LLM Agent, the `datastore_id`, `search_id`, and `filter` parameters are static: they are fixed when the agent is configured. This prevents adjusting the grounding filter based on real-time user queries or contextual data. For example, in a multi-tenant application we may need to filter grounding results by a `user_id` or `company_id` that changes with each user interaction, or apply different filters depending on the detected intent of a user's query.
I can work around this today by modifying the filter in a "before model" callback, but that approach is less intuitive and less integrated with the state management capabilities that conversational AI frameworks typically provide.
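For concreteness, the workaround looks roughly like the sketch below. The `build_filter` helper, the state keys (`company_id`, `user_id`), and the exact attribute being patched are all illustrative assumptions, not part of the ADK API; only the general callback shape follows ADK's before-model-callback pattern.

```python
# Sketch of the current workaround: a "before model" callback rebuilds
# the grounding filter from per-session state on every turn.
# build_filter and the state keys below are illustrative, not ADK API.

def build_filter(state: dict) -> str:
    """Compose a search filter string from per-session state values."""
    clauses = []
    if "company_id" in state:
        clauses.append(f'company_id: ANY("{state["company_id"]}")')
    if "user_id" in state:
        clauses.append(f'user_id: ANY("{state["user_id"]}")')
    return " AND ".join(clauses)

def before_model_callback(callback_context, llm_request):
    """Illustrative callback body. The attribute path for the search
    tool's filter is a placeholder -- it depends on where the grounding
    tool config actually lives in the request object."""
    dynamic_filter = build_filter(callback_context.state)
    # llm_request.<search_tool_config>.filter = dynamic_filter  # placeholder wiring
    return None  # returning None lets the (patched) request proceed
```

This works, but it scatters grounding configuration across the agent definition and the callback, which is the maintainability concern behind this request.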
**Describe the solution you'd like**
I would like the `filter` parameter for vector search grounding in the LLM Agent (and ideally `datastore_id` and `search_id` as well) to be dynamically configurable. Specifically, it would be highly beneficial if these parameters could read their values from state variables in the LLM Agent's context.
This would allow for more flexible and context-aware grounding by enabling:
- Dynamic Filtering: Adjusting search filters based on user input, session variables, or other dynamic data.
- Simplified Logic: Centralizing filter logic within state management rather than requiring direct manipulation in callbacks.
- Improved Maintainability: Making it easier to manage and update filtering rules as application requirements evolve.
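One possible shape for this feature, sketched below under stated assumptions: the filter is declared once as a template with `{placeholder}` variables, and the framework resolves them from session state at request time, similar in spirit to how ADK resolves `{state_var}` templates in agent instructions. `resolve_template` is a hypothetical function, not an existing ADK API.

```python
import re

# Hypothetical resolution step for the requested feature: a statically
# configured filter template whose {placeholders} are filled from
# session state on each turn. resolve_template is illustrative only.

def resolve_template(template: str, state: dict) -> str:
    """Replace every {key} in the template with str(state[key])."""
    def sub(match: re.Match) -> str:
        key = match.group(1)
        if key not in state:
            raise KeyError(f"state variable {key!r} is not set")
        return str(state[key])
    return re.sub(r"\{(\w+)\}", sub, template)

# Configured once on the agent:
filter_template = 'company_id: ANY("{company_id}") AND region: ANY("{region}")'

# Resolved per turn from session state:
resolved = resolve_template(filter_template, {"company_id": "acme", "region": "emea"})
```

With something like this, the filter logic lives in one declarative place, and callbacks are only needed for cases the template syntax cannot express.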
**Describe alternatives you've considered**
As mentioned above, the alternative I currently use is modifying the filter dynamically in a "before model" callback.