You can still use proxy_pass within content_by_lua, with a little indirection. That's what I was doing before I rewrote my code to use the lua-resty-http library I recommended you consider. But it does have downsides, as Thibault indicated. What I had been doing previously was setting up a 'proxy' location:
location /proxy {
    rewrite ^/proxy/+(.*) /$1 break;
    ...
    proxy_pass http://$proxy_host;
}
And then calling it via ngx.location.capture from within the content_by_lua block:
location / {
    content_by_lua_block {
        ...
        -- the client body must be read before it can be forwarded
        ngx.req.read_body()
        local response = ngx.location.capture("/proxy" .. ngx.var.request_uri, {
            method = ngx["HTTP_" .. ngx.req.get_method()],
            body = ngx.req.get_body_data(),
        })
        ...
        for k, v in pairs(response.header) do
            ngx.header[k] = v
        end
        ngx.header["date"] = ngx.http_time(ngx.time())
        ngx.status = response.status
        if response.body then
            ngx.print(response.body)
        end
    }
}
The downside is that the entire proxied response is buffered before ngx.location.capture returns, so if it's large and slow then you are forced to wait for all of it. NGINX will even do things like automatically streaming to a local tempfile if it's a very large (or very slow, I think) response, so as to not overwhelm memory.
This is the reason I ended up switching to lua-resty-http and implementing streaming response logic with it. While the technique above worked fine for proxying requests to services that returned small payloads quickly, the delay before things like large PDFs started being served was noticeable. The total time to serve via /proxy wasn't actually bad, but a 3-second delay before the response started appearing to the client was.
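For reference, the streaming version looked roughly like this. This is a sketch rather than my exact code: the host and port are placeholders, and note that hop-by-hop headers like Transfer-Encoding and Connection have to be skipped when copying headers, since NGINX manages those itself:

location / {
    content_by_lua_block {
        local http = require "resty.http"
        local httpc = http.new()

        -- placeholder upstream; in practice this would come from config
        local ok, err = httpc:connect({
            scheme = "http",
            host = "backend.example.com",
            port = 8080,
        })
        if not ok then
            ngx.log(ngx.ERR, "connect failed: ", err)
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end

        ngx.req.read_body()
        local res, err = httpc:request({
            method = ngx.req.get_method(),
            path = ngx.var.request_uri,
            headers = ngx.req.get_headers(),
            body = ngx.req.get_body_data(),
        })
        if not res then
            ngx.log(ngx.ERR, "request failed: ", err)
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end

        ngx.status = res.status
        for k, v in pairs(res.headers) do
            local lk = k:lower()
            if lk ~= "transfer-encoding" and lk ~= "connection" then
                ngx.header[k] = v
            end
        end

        -- stream the body to the client in chunks instead of
        -- buffering the whole response first
        local reader = res.body_reader
        repeat
            local chunk, read_err = reader(65536)
            if read_err then
                ngx.log(ngx.ERR, "read failed: ", read_err)
                break
            end
            if chunk then
                ngx.print(chunk)
                ngx.flush(true)
            end
        until not chunk

        httpc:set_keepalive()
    }
}

The key difference is res.body_reader: instead of getting the whole body back at once, you pull it in chunks and flush each one to the client as it arrives, so the first bytes reach the client almost immediately.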