[BUG] Agent Framework: Handle model response when toolUse is not accompanied by text #3755

Merged
@@ -320,7 +320,15 @@ public static Map<String, String> parseLLMOutput(
parseThoughtResponse(modelOutput, thoughtResponse);
} else if (parameters.containsKey(TOOL_CALLS_PATH)) {
modelOutput.put(THOUGHT_RESPONSE, StringUtils.toJson(dataAsMap));
Object response = JsonPath.read(dataAsMap, parameters.get(LLM_RESPONSE_FILTER));
Object response;
boolean isToolUseResponse = false;
try {
response = JsonPath.read(dataAsMap, parameters.get(LLM_RESPONSE_FILTER));
Collaborator:

Is it easier to just check whether LLM_RESPONSE_FILTER is in parameters.keySet(), read the response from it if so, and otherwise read from TOOL_CALLS_PATH? Then this exception handling isn't necessary.

@mingshl (Collaborator) Apr 23, 2025:

boolean hasResponseFilter = parameters.containsKey(LLM_RESPONSE_FILTER);

if (hasResponseFilter) {
    response = JsonPath.read(dataAsMap, parameters.get(LLM_RESPONSE_FILTER));
} else {
    response = JsonPath.read(dataAsMap, parameters.get(TOOL_CALLS_PATH));
}

Contributor Author:


Both keys will be present in the parameters map for all models that support function calling:

LLM_RESPONSE_FILTER points to where we can fetch the text returned by the model. Once we fetch the text, the logic checks whether a toolUse block follows it.

TOOL_CALLS_PATH points to where we can fetch a toolUse block returned by the model.

For example, these are the values for Claude:
LLM_RESPONSE_FILTER: $.choices[0].message.content
TOOL_CALLS_PATH: $.choices[0].message.tool_calls

Before this fix, we would always look for LLM_RESPONSE_FILTER, and in the case of this payload:

{"metrics":{"latencyMs":2875},"output":{"message":{"content":[{"toolUse":{"input":{"index":"ss4o_logs-2025.04.18","query":{"size":10,"_source":["body","time","severityText","log.attributes","resource.attributes.service.name"],"query":{"bool":{"must":[{"wildcard":{"body":"*cart*"}},{"term":{"severityText":"ERROR"}}]}}}},"name":"SearchIndexTool","toolUseId":"tooluse_c33B609fSbCsgDeTTj8VmA"}}],"role":"assistant"}},"stopReason":"tool_use","usage":{"cacheReadInputTokenCount":0,"cacheReadInputTokens":0,"cacheWriteInputTokenCount":0,"cacheWriteInputTokens":0,"inputTokens":9000,"outputTokens":152,"totalTokens":9152}}

That would lead to an error since there is no text field, so we fall back to the toolUse path. This is required because different models return different response formats. A better way to handle this would be to first check a parent path like $.choices[0].message, which might contain both the text and the toolUse block as children, but that structure is not guaranteed since the response heavily depends on the model. Hence the try-catch.
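
To make the fallback concrete, here is a minimal, self-contained sketch (not the ml-commons code itself) that runs the same two Jayway JsonPath reads against a stripped-down tool-use-only payload. The class name and the two path strings are hypothetical placeholders standing in for LLM_RESPONSE_FILTER and TOOL_CALLS_PATH.

// Minimal sketch, not the actual ml-commons implementation: the class name and
// the two JsonPath strings below are hypothetical placeholders.
import com.jayway.jsonpath.JsonPath;
import com.jayway.jsonpath.PathNotFoundException;

public class ToolUseFallbackSketch {
    public static void main(String[] args) {
        // Tool-use-only payload: a toolUse block but no text field.
        String payload = "{\"output\":{\"message\":{\"content\":"
                + "[{\"toolUse\":{\"name\":\"SearchIndexTool\",\"input\":{}}}],"
                + "\"role\":\"assistant\"}},\"stopReason\":\"tool_use\"}";

        String llmResponseFilter = "$.output.message.content[0].text";  // hypothetical LLM_RESPONSE_FILTER
        String toolCallsPath = "$.output.message.content[0].toolUse";   // hypothetical TOOL_CALLS_PATH

        Object response;
        boolean isToolUseResponse = false;
        try {
            // Normal case: the model returned text at the response filter path.
            response = JsonPath.read(payload, llmResponseFilter);
        } catch (PathNotFoundException e) {
            // No text in this payload, so fall back to the toolUse path.
            response = JsonPath.read(payload, toolCallsPath);
            isToolUseResponse = true;
        }
        System.out.println("isToolUseResponse=" + isToolUseResponse + ", response=" + response);
    }
}

On this payload the first read throws PathNotFoundException, so the sketch should print isToolUseResponse=true, which is the situation the new catch block in the diff is meant to handle.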

} catch (PathNotFoundException e) {
// If the regular response path fails, try the tool calls path
response = JsonPath.read(dataAsMap, parameters.get(TOOL_CALLS_PATH));
isToolUseResponse = true;
}

String llmFinishReasonPath = parameters.get(LLM_FINISH_REASON_PATH);
String llmFinishReason = "";
@@ -330,7 +338,7 @@ public static Map<String, String> parseLLMOutput(
} else {
llmFinishReason = JsonPath.read(dataAsMap, llmFinishReasonPath);
}
if (parameters.get(LLM_FINISH_REASON_TOOL_USE).equalsIgnoreCase(llmFinishReason)) {
if (parameters.get(LLM_FINISH_REASON_TOOL_USE).equalsIgnoreCase(llmFinishReason) || isToolUseResponse) {
List toolCalls = null;
try {
String toolCallsPath = parameters.get(TOOL_CALLS_PATH);