I have seen related discussion: #341
and a related PR: #654
But it seems like function tools don't support returning images as outputs yet.
I wonder what the best workaround would be, or whether supporting images in tool outputs would make sense for my use case?
For context, I'm building a PagerDuty alert root cause analysis agent with access to tools like this:
agent = Agent(
    name="SRE agent",
    instructions="You are an expert SRE agent. Help me diagnose the root cause.",
    tools=[search_logs_on_elasticsearch, check_panel_on_grafana],
)
For the check_panel_on_grafana tool, since the time series data could be huge, I was thinking I'd first plot the data as an image and then feed the image into the LLM along with some descriptions (start time, end time, panel name, etc.).
I was thinking of just returning both the image and the text directly in the function output, but it seems that's not supported yet.
Is my best workaround something like this, i.e. calling the LLM directly inside the tool and returning the result?
@function_tool
def check_panel_on_grafana():
    data = get_data_from_grafana()
    graph = plot_graph(data)
    description = "cool description"
    prompt = "Describe the image as thoroughly as possible."
    result = call_chatgpt_directly(prompt, graph, description)
    return result
But I guess call_chatgpt_directly won't have the context of all the previous actions the agent has taken so far, and since we only return text, future actions won't get to see the actual image either.
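In case it helps, here is roughly what I mean by call_chatgpt_directly. This is only a sketch using the standard OpenAI Python client; the helper name, its signature, the model choice, and the assumption that plot_graph saves the figure and returns its file path are all mine, not part of the Agents SDK:

import base64
from openai import OpenAI

client = OpenAI()

def call_chatgpt_directly(prompt: str, image_path: str, description: str) -> str:
    # Read the saved plot and inline it as a base64 data URL
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would do
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": f"{prompt}\n\nContext: {description}"},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    # Only this text summary flows back into the agent loop; the image itself is not retained
    return response.choices[0].message.content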
You're correct: the current function_tool support in the OpenAI Agents SDK does not yet let a tool return images directly as structured output, and there is no structured way to keep an image in the agent's memory for later tools.
The workaround is to save the image and return a public or internal URL: plot the graph, upload it to a temporary cloud file store (e.g., S3, Cloudflare R2, or file.io for throwaway uploads), then return the URL and description in the tool output.
@function_tool
def check_panel_on_grafana(start_time: str, end_time: str):
    data = get_data_from_grafana(start_time, end_time)
    image_path = plot_graph_and_save(data)  # Save locally
    image_url = upload_to_temp_storage(image_path)  # Upload to cloud
    description = f"Panel from {start_time} to {end_time}"
    return {
        "image_url": image_url,
        "description": description
    }
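upload_to_temp_storage is not part of any SDK here; a minimal sketch using boto3 and a presigned S3 URL could look like the following, where the bucket name and expiry are placeholders you'd replace with your own:

import os
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-grafana-snapshots"  # placeholder bucket name

def upload_to_temp_storage(image_path: str) -> str:
    # Random key so repeated panel snapshots don't overwrite each other
    key = f"panels/{uuid.uuid4()}{os.path.splitext(image_path)[1]}"
    s3.upload_file(image_path, BUCKET, key)
    # A presigned URL keeps the bucket private while still giving the model a fetchable link
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=3600,  # 1 hour
    )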
Connect it with the agent:
agent = Agent(
    name="agent name...",
    instructions="your system prompt",
    tools=[
        check_panel_on_grafana,
        analyze_grafana_panel,
        search_logs_on_elasticsearch
    ]
)
Then run it with the Runner class.
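A minimal sketch of that last step, assuming the usual agents import and using an example alert prompt:

from agents import Runner

# Kick off a single run; the agent decides which tools to call
result = Runner.run_sync(
    agent,
    "Alert PD-1234 fired at 03:12 UTC. What is the likely root cause?",  # example input
)
print(result.final_output)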