Description
Is there an existing issue for this?
- I have searched the existing issues
Kong version ($ kong version)
3.10
Current Behavior
I've noticed that requests that do not set the HTTP header "Content-Type" to "application/json" can specify their own model, bypassing the forced-model configuration in the AI Proxy.
If I configure the model name "gpt-4o" in the AI Proxy, this request:
curl -N -X POST -v -H 'Content-Type: application/json' http://KONG_GW:8000/llm/chat/completions --data '{"model": "llama4", "messages": [{"role": "user", "content": "what time is it?"}]}'
produces the error message: {"error":{"message":"cannot use own model - must be: gpt-4o"}}
This is correct, because the request tries to query the model "llama4" while the AI Proxy is configured for "gpt-4o".
If I send the same request but strip the Content-Type header:
curl -N -X POST -v http://KONG_GW:8000/llm/chat/completions --data '{"model": "llama4", "messages": [{"role": "user", "content": "what time is it?"}]}'
I get a result back from the backend LLM.
So clients can use their own model name, bypassing the AI Proxy configuration.
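For illustration, here is a minimal Python sketch of what the guard logic appears to be doing (hypothetical code, not Kong's actual implementation; FORCED_MODEL and check_model are invented names): the model check seems to run only when the request advertises a JSON body, so omitting the Content-Type header skips enforcement entirely.

```python
import json

FORCED_MODEL = "gpt-4o"  # hypothetical: the model configured on the AI Proxy

def check_model(headers: dict, body: bytes):
    """Return an error dict if the client tries to pick its own model."""
    if headers.get("content-type") != "application/json":
        return None  # body never parsed -> model override slips through
    payload = json.loads(body)
    if payload.get("model") and payload["model"] != FORCED_MODEL:
        return {"error": {"message": f"cannot use own model - must be: {FORCED_MODEL}"}}
    return None

body = b'{"model": "llama4", "messages": [{"role": "user", "content": "what time is it?"}]}'
# With the header set, the override is rejected:
assert check_model({"content-type": "application/json"}, body) is not None
# Without it, the check is bypassed and the request goes through:
assert check_model({}, body) is None
```

This matches the observed behavior: the same body is rejected with the header present and proxied with it absent.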
Expected Behavior
The AI Proxy should enforce the configured model name in all cases, regardless of the request's Content-Type header.
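A minimal sketch of the expected behavior (again hypothetical code, not Kong's implementation; enforce_model is an invented name): the body is parsed unconditionally and the configured model is written over whatever the client sent, so no header manipulation can bypass it.

```python
import json

FORCED_MODEL = "gpt-4o"  # hypothetical: the model configured on the AI Proxy

def enforce_model(body: bytes) -> bytes:
    """Rewrite the request body so the configured model always wins,
    independent of which Content-Type the client sent."""
    try:
        payload = json.loads(body)
    except ValueError:
        # A non-JSON body should be rejected, not forwarded untouched
        raise ValueError("request body is not valid JSON")
    payload["model"] = FORCED_MODEL
    return json.dumps(payload).encode()

fixed = enforce_model(b'{"model": "llama4", "messages": []}')
assert json.loads(fixed)["model"] == "gpt-4o"
```

Whether the fix overwrites the model or rejects the request is a design choice; the key point is that the body must be inspected even when the Content-Type header is missing or different.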
Steps To Reproduce
No response
Anything else?
No response