Nginx with Go?


Ben Williams

Feb 26, 2016, 4:42:54 PM
to golang-nuts
Hello All,

I have been serving a Go web application behind Nginx as a reverse proxy. I have seen some online tutorials talk about using HTTP/2 and serving HTTPS directly from Go. My question is: should I be using Nginx to serve my Go app, or is Go by itself enough? Any advice would be great; everything I find when I search is from 3+ years ago and not in the context of HTTP/2 and HTTPS.

Shawn Milochik

Feb 26, 2016, 4:47:38 PM
to golang-nuts
On Fri, Feb 26, 2016 at 1:34 PM, Ben Williams <benwill...@gmail.com> wrote:
Hello All,

I have been serving a Go web application behind Nginx as a reverse proxy. I have seen some online tutorials talk about using HTTP/2 and serving HTTPS directly from Go. My question is: should I be using Nginx to serve my Go app, or is Go by itself enough? Any advice would be great; everything I find when I search is from 3+ years ago and not in the context of HTTP/2 and HTTPS.

I used to serve via nginx, but switched recently to serving directly from Go. It has a bunch of benefits. For one, you can use req.RemoteAddr to get the IP of the user; with nginx it's always 127.0.0.1, and you have to resort to having nginx write an X-Forwarded-For header. Also, Go 1.6 with its HTTP/2 support (as you mentioned) is great, and serving TLS is exactly as simple as serving plain HTTP, so why not?
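
A minimal sketch of the setup Shawn describes (the certificate paths, port, and handler below are placeholders, not anything from this thread): serve HTTPS directly from Go and read the client address from the request, only falling back to X-Forwarded-For when a proxy sits in front.

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Served directly by Go, RemoteAddr is the real client ("ip:port").
		ip := r.RemoteAddr
		// Behind a proxy it would be 127.0.0.1 and you would have to trust a header instead.
		if fwd := r.Header.Get("X-Forwarded-For"); fwd != "" {
			ip = fwd
		}
		fmt.Fprintf(w, "hello, %s\n", ip)
	})

	// Serving TLS is one call; Go 1.6 negotiates HTTP/2 automatically over TLS.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}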

The only tiny bump in the road is that, if you choose to run it as a non-root user (as you should, of course), you can't listen on "low" ports (such as 80 or 443) by default, but that's super-simple to fix:

    sudo /sbin/setcap cap_net_bind_service=+ep yourApp

So I don't see any point in using nginx at all. I haven't measured performance, but it stands to reason that the nginx "middleman" adds some latency; probably not enough to count as a real downside, though.


Dave Cheney

Feb 26, 2016, 4:48:38 PM
to golang-nuts
Unless you are using nginx to act as a reverse proxy for a number of domains, you can just skip it and have Go serve traffic directly.
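
For the multi-domain case Dave mentions, the standard library can also do that fan-out itself. A rough sketch (the host names and backend addresses are placeholders) using net/http/httputil:

package main

import (
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder mapping of incoming Host header to backend service.
	backends := map[string]string{
		"app1.example.com": "http://127.0.0.1:8081",
		"app2.example.com": "http://127.0.0.1:8082",
	}

	proxies := make(map[string]*httputil.ReverseProxy)
	for host, target := range backends {
		u, err := url.Parse(target)
		if err != nil {
			log.Fatal(err)
		}
		proxies[host] = httputil.NewSingleHostReverseProxy(u)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		host := r.Host
		// Strip a port if the client sent one, e.g. "app1.example.com:80".
		if h, _, err := net.SplitHostPort(host); err == nil {
			host = h
		}
		if p, ok := proxies[host]; ok {
			p.ServeHTTP(w, r)
			return
		}
		http.Error(w, "unknown host", http.StatusNotFound)
	})

	log.Fatal(http.ListenAndServe(":80", nil))
}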

Tamás Gulácsi

Feb 27, 2016, 1:23:46 AM
to golang-nuts
You can try Caddy (caddyserver.com), a nice, simple reverse proxy written in Go; it serves HTTP/2 and has automatic TLS certificate creation with Let's Encrypt!

Jakob Borg

Feb 27, 2016, 1:56:50 AM
to Sh...@milochik.com, golang-nuts
I agree with all of that, while noting that, at least for terminating HTTPS with large RSA certificates, nginx does it in about half the CPU time of Go 1.5. I haven't compared with Go 1.6, which includes improvements to at least the AES crypto, so it may be faster here. With EC certificates I think the difference is smaller. But there may still be valid reasons, on top of multiplexing multiple paths to separate backend services, to run nginx as a front end speaking plain HTTP to a local Go backend.

//jb

Manlio Perillo

Feb 27, 2016, 5:37:27 AM
to golang-nuts
On Friday, 26 February 2016 at 22:42:54 UTC+1, Ben Williams wrote:
Hello All,

I have been serving a Go web application behind Nginx as a reverse proxy. I have seen some online tutorials talk about using HTTP/2 and serving HTTPS directly from Go. My question is: should I be using Nginx to serve my Go app, or is Go by itself enough? Any advice would be great; everything I find when I search is from 3+ years ago and not in the context of HTTP/2 and HTTPS.

One possible benefit is serving static files, since Nginx was designed for that purpose and offers a lot of options.
Caddy can probably do the same. I don't know whether the Caddy proxy supports HTTP/2; Nginx does not support it.

Another benefit of a frontend is being able to survive a Go server panic.
The standard net/http server never crashes on a handler panic, since it recovers the panic. However, this is arguably bad: the server should terminate, gracefully closing existing connections.
With a frontend, you can keep two instances of your application running (maybe managed by systemd).
With Caddy you could probably write a plugin to automate this.
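
To make that recovery behavior concrete, here is a tiny hypothetical demo (the paths and port are made up): a panic in one handler is recovered by net/http and logged, only that connection is dropped, and the process keeps serving, so it never gets the chance to fail over on its own.

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/boom", func(w http.ResponseWriter, r *http.Request) {
		// net/http recovers this, logs "http: panic serving ..." and closes the connection.
		panic("something went badly wrong")
	})
	http.HandleFunc("/ok", func(w http.ResponseWriter, r *http.Request) {
		// Still answers normally after /boom has panicked: the process keeps running.
		fmt.Fprintln(w, "still alive")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}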


Regards  Manlio

Chris Kastorff

Feb 27, 2016, 1:10:57 PM
to Manlio Perillo, golang-nuts
On Sat, Feb 27, 2016 at 2:37 AM, Manlio Perillo <manlio....@gmail.com> wrote:
One possible benefit is serving static files, since Nginx was designed for that purpose and offers a lot of options.
Caddy can probably do the same. I don't know whether the Caddy proxy supports HTTP/2; Nginx does not support it.

Nginx has supported HTTP/2 since version 1.9.5 (released in September 2015); I enabled it on my servers a few weeks ago. (That release also removed support for SPDY.)

Manlio Perillo

Feb 27, 2016, 1:27:28 PM
to golang-nuts, manlio....@gmail.com
Yes, but HTTP/2 is not supported by the proxy module.
The proxy_http_version directive only supports 1.0 or 1.1, and 1.1 (keep-alive connections) was only added in nginx 1.1.4.


Manlio

Brian Hatfield

Feb 27, 2016, 2:40:40 PM
to Manlio Perillo, golang-nuts
We used to run Nginx in front of our Go apps. They are *very* high-throughput. Removing Nginx and having Go serve the apps directly gave roughly a 2x performance/load improvement, allowing us to cut our deployed capacity in half, saving lots of AWS money.


Steven Hartland

Feb 28, 2016, 7:19:50 AM
to golan...@googlegroups.com
Interesting Brian, did you identify why the performance changed? Was it just that you were handling every request twice?

What was the reason you put nginx in front in the first place?

Manlio Perillo

Feb 28, 2016, 10:18:55 AM
to golang-nuts, manlio....@gmail.com
Interesting, but I would like to know more details.
Do you serve static files? Do you use TLS? How was the Nginx proxy module configured?


Thanks  Manlio

Brian Hatfield

Feb 28, 2016, 10:32:18 AM
to Manlio Perillo, golang-nuts
In this particular context, we did not serve static files from Nginx. We use TLS, but it's terminated on the load balancer rather than on the instance (IOW: no). The proxy module was configured as simply as we could conceive, essentially just proxy_pass. Here are some snippets from how we used to configure it (with a little ERB for filling in certain variables):

upstream <%= @name %> {
  server <%= @server %>;
}


and

proxy_redirect off;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
<% if @limit_verbs.empty? %>
proxy_pass http://<%= @name %>;
<% else %>
limit_except <%= @limit_verbs.join(' ') %> {
  proxy_pass http://<%= @name %>;
}
<% end %>
break;


Hope that's helpful.

Manlio Perillo

Feb 28, 2016, 10:39:03 AM
to Brian Hatfield, golang-nuts
Sorry if I use interleaved posting, but IMHO it is more practical than top posting.

On Sun, Feb 28, 2016 at 4:31 PM, Brian Hatfield <bmhat...@gmail.com> wrote:
> In this particular context, we did not serve static files from Nginx. We use
> TLS, but it's terminated on the load balancer, rather than the instance
> (IOW: no).

This means that there was no practical advantage to using Nginx, unless
you have more than one Go backend.

> The proxy module was configured as simply as we could conceive,
> essentially just proxy_pass.

Note that by default the proxy module uses HTTP/1.0, which means no keep-alive connections:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version

> [...]


Manlio

Brian Hatfield

Feb 28, 2016, 10:43:42 AM
to Manlio Perillo, golang-nuts
Yes, you're quite right that Nginx provided little meaningful value (and has drawbacks, like you pointed out), which is why we stopped deploying in this manner. Why we ever did it in the first place is a bit dependent on context:

We had fronted our Go apps with Nginx because our first production Go app was deployed in July 2013, and at that time there weren't anywhere *near* as many deployed applications doing what we're doing. It was a bit of an optimistic hedge against a certain class of security flaws in the Go runtime (they never manifested), i.e. buffer overflows when dealing with malformed HTTP requests, that sort of thing.

By having Nginx in front of Go, we hoped it would at least perform some baseline HTTP, route, and verb validation. Later, we decided it was a waste of CPU cycles, partially because of increased trust in the Go runtime, and partially because we realized we had other layers that might protect it equally well (i.e., the load balancers).

Hope that helps.
Brian

Brian Hatfield

Feb 28, 2016, 10:49:14 AM
to Steven Hartland, golang-nuts
Yes, essentially, we were handling every request twice. They are small, high-volume requests, so that overhead was meaningful in our deployment. See my reply to Manlio for more context on why :-)

Matt Silverlock

Feb 28, 2016, 11:10:05 AM
to golang-nuts
In my experience, you would reverse proxy through nginx when you need some of the following:

* Static file serving (configured correctly with the open_file_cache* directives to cache file descriptors, metadata, etc.)
* Protection against cache stampedes (serving stale content while the backend returns 500/502/503 or the cache is updating)
* More control over the TLS session ticket cache
* Simpler gzip configuration
* Request queuing (you can combine this with graceful restarts of your Go app to handle application restarts without dropping requests)
* Typically ahead of the curve on adopting modern cipher suites (Go only got AES-GCM + RSA support last week in 1.6, and mobile devices have supported this for a while)

You can do a lot of this in Go, but you'd also have to write it, test it, and spend a lot of time making it just as performant.
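
As a rough, hypothetical illustration of that cost (the handler, port, and names below are made up, not from this thread): even the "simpler gzip configuration" point from the list above takes a fair amount of Go to approximate, versus a one-line gzip on; in nginx.

package main

import (
	"compress/gzip"
	"fmt"
	"net/http"
	"strings"
)

// gzipResponseWriter sends everything the handler writes through a gzip.Writer.
type gzipResponseWriter struct {
	http.ResponseWriter
	gz *gzip.Writer
}

func (w gzipResponseWriter) Write(b []byte) (int, error) { return w.gz.Write(b) }

// withGzip compresses responses for clients that accept gzip. A production
// version also has to deal with Content-Length, Content-Type sniffing,
// Flush/Hijack, HEAD requests, etc.; that is the "write it, test it" cost above.
func withGzip(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r)
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		gz := gzip.NewWriter(w)
		defer gz.Close()
		next.ServeHTTP(gzipResponseWriter{ResponseWriter: w, gz: gz}, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, strings.Repeat("hello golang-nuts ", 200))
	})
	http.ListenAndServe(":8080", withGzip(mux))
}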

If you're writing network services with small datagram sizes, then 'just' Go might be a better choice, especially for internal tools. If you're operating public-facing applications or web services, nginx can do a lot for you. Neither of these is a hard rule, but they're good guidelines to consider.

Sean Russell

Feb 28, 2016, 1:40:19 PM
to golang-nuts
I find this interesting, because over the years I've come to the opposite conclusion. Having a proven, reliable reverse proxy that handles basic middleware functions, leaving the applications (Go, or whatever) to focus on a core competency seems like a big win. A reverse proxy can manage load balancing and fail-over, authentication, and access logging, and can hide back-end version changes and service outages; it removes the need to code these common functions into every application, and provides consistency for these functions in a heterogeneous architecture. Whether it's simple NGINX, NGINX+ modules (like Kong), Apache (with a huge library of middleware modules covering nearly any possible authentication need), or a more explicit middleware application like Tyk, a reverse proxy/middleware server is always the first box I draw in an architectural diagram.

So have I been learning the wrong lessons, or am I behind the curve in current cloud architecture trends?
