feat: support lua proxy upstream #12086

Open

LiteSun wants to merge 19 commits into master
Conversation

@LiteSun LiteSun (Member) commented Mar 26, 2025

Description

Currently, APISIX does not support making HTTP requests during the body_filter phase.

This PR adds a lua_body_filter phase, which uses Lua to make HTTP requests to upstream services, bypassing Nginx's default upstream request behavior.

This allows for more flexible request operations in the lua_body_filter phase.

The lua_body_filter phase executes before the header_filter and body_filter phases.
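
As an illustration, a minimal sketch of a plugin handler for the new phase, assuming the handler receives the plugin conf, the request ctx, and the body fetched by the Lua proxy (the signature is inferred from the example-plugin log calls quoted later in this thread, not an authoritative API):

local core = require("apisix.core")

local _M = {}

function _M.lua_body_filter(conf, ctx, body)
    -- log what arrived from the upstream; the PR's example-plugin does the
    -- same thing with core.log.warn
    core.log.info("lua_body_filter body: ", core.json.delay_encode(body))
end

return _M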

Checklist

  • I have explained the need for this PR and the problem it solves
  • I have explained the changes or the new features added to this PR
  • I have added tests corresponding to this change
  • I have updated the documentation to reflect this change
  • I have verified that this change is backward compatible (If not, please discuss on the APISIX mailing list first)

@LiteSun LiteSun changed the title from "feat: add proxy_nginx_upstream param" to "feat: add lua_proxy_upstream param" on Mar 27, 2025
@LiteSun LiteSun marked this pull request as ready for review March 31, 2025 03:57
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. enhancement New feature or request labels Mar 31, 2025
@LiteSun LiteSun changed the title from "feat: add lua_proxy_upstream param" to "feat: support lua proxy upstream" on Mar 31, 2025
@@ -85,6 +90,10 @@ function _M.access(conf, ctx)
core.log.warn("plugin access phase, conf: ", core.json.encode(conf))
-- return 200, {message = "hit example plugin"}

if conf.lua_proxy_upstream then
@AlinsRan AlinsRan (Contributor) commented Apr 1, 2025

This is a feature PR that adds new plugin configuration, so the plugin schema needs to be updated accordingly.
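
As an illustration, a rough sketch of what such a schema update might look like; the property names follow the fields visible in this PR, while the types and defaults are assumptions:

local schema = {
    type = "object",
    properties = {
        lua_proxy_upstream = {type = "boolean", default = false},
        request_uri = {type = "string"},
        ssl_verify = {type = "boolean", default = true},
        keepalive = {type = "boolean", default = true},
        keepalive_timeout = {type = "integer", minimum = 1000, default = 60000},
    },
}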

common_phase("before_proxy")
return
end

-- run the before_proxy method in access phase first to avoid always reinit request
common_phase("before_proxy")
A contributor suggested the following change:

common_phase("before_proxy")
if api_ctx.lua_proxy_upstream then
     return
end

@@ -550,6 +550,12 @@ function _M.handle_upstream(api_ctx, route, enable_websocket)

set_upstream_headers(api_ctx, server)

-- lua proxy the request to upstream
if api_ctx.lua_proxy_upstream then
@nic-6443 nic-6443 (Member) commented Apr 1, 2025

I thought about the naming of the existing variables bypass_nginx_upstream and lua_proxy_upstream, and their meanings feel unclear. I think we can split them into two flags:

  • bypass_nginx_upstream to indicate bypassing nginx's upstream module
  • bypass_balancer to indicate bypassing the existing balancer logic

cc @membphis What do you think?

A contributor replied:
I agree with this

end

local headers = core.request.headers(ctx)
-- When content-length is cleared, the HTTP server will automatically calculate and
A member suggested:
Suggested change
-- When content-length is cleared, the HTTP server will automatically calculate and
-- When content-length is cleared, the HTTP client will automatically calculate and
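
A small sketch of the idea behind this suggestion, assuming the headers table is the one handed to the Lua HTTP client as in the diff above:

local core = require("apisix.core")

local function build_upstream_headers(ctx)
    local headers = core.request.headers(ctx)
    -- drop the original Content-Length so the Lua HTTP client recomputes it
    -- from the body it actually sends, instead of forwarding a stale value
    headers["content-length"] = nil
    return headers
end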

Comment on lines +89 to +93
-- TODO: support event stream
if content_type and core.string.find(content_type, "text/event-stream") then
core.log.error("event stream is not supported")
return HTTP_INTERNAL_SERVER_ERROR
end
A member commented:

There is already example code for this in ai-proxy.
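
For reference, a rough sketch of how an event stream could be forwarded instead of rejected (this is not the ai-proxy code; it assumes the response comes from lua-resty-http's httpc:request(), whose body_reader supports chunked reads):

local function stream_sse_body(res)
    local reader = res.body_reader
    repeat
        local chunk, err = reader(8192)   -- read up to 8 KiB per iteration
        if err then
            ngx.log(ngx.ERR, "failed to read upstream chunk: ", err)
            return
        end
        if chunk then
            ngx.print(chunk)
            ngx.flush(true)   -- push each event to the client immediately
        end
    until not chunk
end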

"example-plugin": {
"i": 0,
"lua_proxy_upstream": true,
"request_uri": "http://httpbin.org/get",
A member commented:

Rename request_uri to indicate that it is used to change the response body; the current name makes me confuse it with upstream.nodes.

Comment on lines +135 to +136
core.log.warn("plugin lua_body_filter phase, conf: ", core.json.encode(conf))
core.log.warn("plugin lua_body_filter phase, body: ", core.json.encode(body))
A member commented:

You can assert those logs in the test cases to confirm that the body coming from the upstream is passed to lua_body_filter as expected.

A contributor replied:

ditto




=== TEST 14: lua body filter
A member commented:

We need to add more test cases that combine example-plugin with other plugins like proxy-rewrite, which modify downstream requests. This will help verify that the new Lua HTTP client approach works with existing plugins in the rewrite/access phases.

headers = headers,
ssl_verify = conf.ssl_verify,
keepalive = conf.keepalive,
keepalive_timeout = conf.timeout,
A contributor commented:

This timeout is unclear: a connection timeout and an idle (keepalive) connection timeout mean different things. I suggest naming it keepalive_timeout.
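
For reference, a sketch of how the two timeouts stay separate with lua-resty-http; the option names are lua-resty-http's, the values are placeholders:

local http = require("resty.http")

local httpc = http.new()
-- connect / send / read timeouts, in milliseconds
httpc:set_timeouts(3000, 3000, 6000)

local res, err = httpc:request_uri("http://httpbin.org/get", {
    method = "GET",
    ssl_verify = false,
    keepalive = true,
    keepalive_timeout = 60000,   -- how long an idle pooled connection is kept (ms)
    keepalive_pool = 10,
})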

-- Get request body
local body, err = core.request.get_body()
if err then
core.log.error("failed to get request body: ", err)
A contributor commented:

Why isn't there a return for this error?
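
Roughly what is being asked for, sketched; the 500 status code is an assumption, not the PR's choice:

local core = require("apisix.core")

local function read_request_body()
    local body, err = core.request.get_body()
    if err then
        -- stop here instead of proxying with a missing body
        core.log.error("failed to get request body: ", err)
        return nil, 500
    end
    return body
end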

@@ -107,6 +116,49 @@ function _M.access(conf, ctx)
return
end


function _M.before_proxy(conf, ctx)
core.log.warn("plugin before_proxy phase, conf: ", core.json.encode(conf))
A contributor commented:

This should not be a warn-level log; please use core.json.delay_encode.
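
A sketch of the suggested pattern, assuming an info-level log is acceptable here:

local core = require("apisix.core")
local _M = {}

function _M.before_proxy(conf, ctx)
    -- delay_encode defers serialisation until the message is actually emitted
    core.log.info("plugin before_proxy phase, conf: ", core.json.delay_encode(conf))
end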

local code, body = read_response(ctx, res)

-- Set keepalive for connection reuse
if opts.keepalive then
A contributor commented:

Don't we need to close the connection when keepalive is false?
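
One way to handle both cases, sketched under the assumption that httpc is the lua-resty-http client and opts carries the plugin's keepalive settings:

local function release_connection(httpc, opts)
    if opts.keepalive then
        -- return the connection to the cosocket pool for reuse
        local ok, err = httpc:set_keepalive(opts.keepalive_timeout, opts.keepalive_pool)
        if not ok then
            ngx.log(ngx.ERR, "failed to set keepalive: ", err)
        end
    else
        -- keepalive disabled: close the connection explicitly
        local ok, err = httpc:close()
        if not ok then
            ngx.log(ngx.ERR, "failed to close connection: ", err)
        end
    end
end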

@bzp2010 bzp2010 (Contributor) left a comment

What is the point of another complex mechanism introduced by this PR? It introduces yet another plugin phase and adds further learning cost, and I can't accept it without significant justification.

Even a seasoned APISIX developer would now be hard pressed to say what the APISIX phases are and how they correspond to the NGINX phases.

How does this differ from performing HTTP requests in rewrite/access using lua-resty-http?
Has the solution been evaluated for performance differences against the access phase + subrequests? That approach would also let you run external requests in the header_filter and body_filter phases, with no need to introduce a new request phase; a new core API would be enough.
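
For comparison, a sketch of the alternative described here: issuing the request with lua-resty-http directly in the access phase and short-circuiting nginx's upstream, with no new plugin phase. The conf fields request_uri and ssl_verify mirror the ones in this PR; everything else is an assumption:

local core = require("apisix.core")
local http = require("resty.http")

local _M = {}

function _M.access(conf, ctx)
    local httpc = http.new()
    local res, err = httpc:request_uri(conf.request_uri, {
        method = ngx.req.get_method(),
        headers = core.request.headers(ctx),
        body = core.request.get_body(),
        ssl_verify = conf.ssl_verify,
    })
    if not res then
        core.log.error("failed to request upstream: ", err)
        return 502
    end

    -- returning code and body from the access phase terminates the request,
    -- so nginx's own upstream/proxy_pass is never invoked
    return res.status, res.body
end

return _M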

Labels: enhancement (New feature or request), size:L (This PR changes 100-499 lines, ignoring generated files)

5 participants