Using Varnish in Cloud Foundry


Phillip Neumann

Nov 20, 2014, 4:18:01 PM
to vcap...@cloudfoundry.org
Hi all,


I was thinking about how Varnish could be set up in a Cloud Foundry environment.
I guess a typical Varnish setup would be something like this:

                  |----> app instance 1 |
---> varnish (*)  |                     |------> services
                  |----> app instance 2 |  


(*) Maybe having a cold Varnish standby nearby, so if the main one fails, the secondary can take over


Where Varnish's main roles are to:

1.- Cache requests, so repeated ones don't need to be processed again by the app/services
2.- Load balance the requests.


In CF, gorouter does the load balancing to the apps, but it does not provide cache features.
I can think of the following ways to use Varnish in CF:

A.- In a buildpack.
    Downside: If you scale your app to 3, you would end up with 3 Varnish instances. The hit rate would be lower than with just 1

B.- As another app (maybe using an 'executable buildpack', if one exists?)
    Like:   gorouter --> varnish app ---> gorouter --> app instance
    Downside:
      - Requests that are not cached in Varnish would need to pass through gorouter twice (adding a little latency?)

C.- Having your own CF pass all requests to Varnish before they arrive at gorouter
     Downside: Some apps may not want to use Varnish at all.


The option I like best so far is B, but Varnish doesn't feel like an 'app'; it's more like a 'front service' or something like that.
Maybe things like 'request rate limiting' would be a similar use case (?)

I haven't deeply analysed the situation, but I'm curious: what do you think is the best setup?
Maybe there is another alternative I'm not seeing?

Thanks!!
--


__________________
pneu...@gmail.com
@killfil

Aristoteles Neto

Nov 20, 2014, 5:13:26 PM
to vcap...@cloudfoundry.org
I’ve deployed your option C, and it works rather well.

You do have a point that some applications may not want to use Varnish. For those cases, we've configured Varnish so that it respects caching headers; we simply instruct those applications to set the headers, which makes Varnish irrelevant for them.
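To make the opt-out concrete: by default Varnish treats responses marked `private` or `no-store` as uncacheable and passes them through, so an application only needs to emit such a header. The exact header values below are illustrative, not something CF or Varnish mandates:

```shell
# Illustrative Cache-Control values an app behind Varnish might send.
cacheable='Cache-Control: public, max-age=300'   # Varnish may cache this response
uncacheable='Cache-Control: private, no-store'   # Varnish passes this through uncached
printf '%s\n%s\n' "$cacheable" "$uncacheable"
```

With this approach the operator's Varnish config stays generic, and each application decides per-response whether it participates in caching.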

Aristoteles Neto



--
You received this message because you are subscribed to the Google Groups "Cloud Foundry Developers" group.
To view this discussion on the web visit https://groups.google.com/a/cloudfoundry.org/d/msgid/vcap-dev/CAHoZ-Ax_-11HS6FgQ2MLghdgxcPsWh4qYD_ZX14xGQMsOPgo2g%40mail.gmail.com.

To unsubscribe from this group and stop receiving emails from it, send an email to vcap-dev+u...@cloudfoundry.org.

Etourneau Gwenn

Nov 21, 2014, 8:14:27 AM
to vcap...@cloudfoundry.org
Hi, I think B is more flexible, especially if you have tons of users; each of them can have specific requirements.

You just add an extra hop, which should be fine. A better solution could be an app listening for register messages on NATS and refreshing the Varnish config with the dea:port of your app instances. That way you can avoid going back through the router again.
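A tiny sketch of Gwenn's idea: turn a `router.register` NATS payload into a Varnish backend stanza. The payload shape below is an assumption based on 2014-era CF (a real implementation would subscribe to NATS and reload varnishd, which this sketch does not do):

```shell
# Hypothetical router.register payload (host/port/uris are made-up values).
msg='{"host":"10.0.16.5","port":61001,"uris":["myapp.example.com"]}'

# Extract the DEA host and port with sed (a real tool would use a JSON parser).
host=$(printf '%s' "$msg" | sed -n 's/.*"host":"\([^"]*\)".*/\1/p')
port=$(printf '%s' "$msg" | sed -n 's/.*"port":\([0-9]*\).*/\1/p')

# Render a VCL backend pointing Varnish straight at the DEA, skipping gorouter.
backend=$(printf 'backend app_%s { .host = "%s"; .port = "%s"; }' "$port" "$host" "$port")
echo "$backend"
```

The generated stanza would be written into the VCL file and varnishd told to reload, keeping the backend list in sync as instances come and go.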

Gwenn

Daniel Mikusa

Nov 21, 2014, 9:01:03 AM
to vcap...@cloudfoundry.org
On Thu, Nov 20, 2014 at 4:17 PM, Phillip Neumann <pneu...@gmail.com> wrote:
> Hi all,
>
> I was thinking about how Varnish could be set up in a Cloud Foundry environment.
> I guess a typical Varnish setup would be something like this:
>
>                   |----> app instance 1 |
> ---> varnish (*)  |                     |------> services
>                   |----> app instance 2 |
>
> (*) Maybe having a cold Varnish standby nearby, so if the main one fails, the secondary can take over
>
> Where Varnish's main roles are to:
>
> 1.- Cache requests, so repeated ones don't need to be processed again by the app/services
> 2.- Load balance the requests.
>
> In CF, gorouter does the load balancing to the apps, but it does not provide cache features.
> I can think of the following ways to use Varnish in CF:
>
> A.- In a buildpack.
>     Downside: If you scale your app to 3, you would end up with 3 Varnish instances. The hit rate would be lower than with just 1

I've seen this used before with the PHP buildpack I maintain.  From what I heard, it worked out well.  I would agree with your downside, and add that it will also consume more memory in the application, which could be a downside too.
 

> B.- As another app (maybe using an 'executable buildpack', if one exists?)
>     Like:   gorouter --> varnish app ---> gorouter --> app instance
>     Downside:
>       - Requests that are not cached in Varnish would need to pass through gorouter twice (adding a little latency?)

This is probably the easiest because it's just another app on CF.  You could push it with something like the Null Buildpack[1]; you'd get all the reliability of an app deployed to CF, and it would have a separate memory allocation.
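A rough sketch of what the pushed artifact might look like, assuming a buildpack that just runs a user-supplied binary. The hostnames, port fallback, and VCL below are placeholders, not CF defaults:

```shell
# Assemble a minimal Varnish-as-an-app directory (option B sketch).
mkdir -p varnish-app

# Minimal VCL: cache misses are sent back through the router, which then
# load-balances to the real app instances ("gorouter.example.com" is assumed).
cat > varnish-app/default.vcl <<'EOF'
backend default {
  .host = "gorouter.example.com";
  .port = "80";
}
EOF

# Start script: CF hands the app its listen port in $PORT, so varnishd
# must bind there; -F keeps it in the foreground as CF expects.
cat > varnish-app/start.sh <<'EOF'
#!/bin/sh
exec varnishd -F -a ":${PORT:-8080}" -f default.vcl -s malloc,64m
EOF
chmod +x varnish-app/start.sh
```

From there, a `cf push` with the buildpack and `start.sh` as the start command would give the Varnish layer its own instances and its own memory allocation, as described above.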

 

> C.- Having your own CF pass all requests to Varnish before they arrive at gorouter
>      Downside: Some apps may not want to use Varnish at all.

Also seems like a fine option, but the downside to me is that it would require management and servers/VMs to run on.  As for the apps that don't want to use Varnish, you could set up a second LB for those.  You can have multiple things point to the gorouter; you'd just need to have DNS set up to direct the apps accordingly.

Ex:

   LB      -----|
                |------>  gorouter ---> app instance
   Varnish -----|

Dan

Phillip Neumann

Nov 21, 2014, 3:36:11 PM
to vcap...@cloudfoundry.org
Hi,

Thanks for the feedback!

Certainly, having the app behind Varnish send headers telling it not to cache the responses will make Varnish 'irrelevant'.

Etourneau, so it's possible for an app to listen to the CF NATS messages of other apps in the same organisation?
Great, that way you can point Varnish directly at the DEA port :)


What worried me about options A and C is multi-tenancy. Normally one wishes to purge some things, and it would probably be a good idea to have some kind of ACL on that kind of action, etc.

It looks like, if one wants CF to manage the Varnish, B would be a great option.
Otherwise, one could still put Varnish(es) in front of the router and use the DNS trick Daniel suggested :)


I was beginning to think that having the possibility to bind an app to a 'front' service would be a good idea, like:
cf bind-front-service myapp varnish1G

This would make gorouter pass the request to the varnish1G service instead of directly to the DEA port, thus enabling that kind of service. But it looks like users can do exactly that with option B and Etourneau's tip, without worrying about multi-tenancy at all.. :)

Thanks!!






--


__________________
pneu...@gmail.com
@killfil

James Bayer

Nov 21, 2014, 4:52:43 PM
to vcap...@cloudfoundry.org
we are considering a point of extensibility for the router that may enable certain routes/apps to indicate that they should pass through a pluggable proxy before sending things on to the app instance.

we're still working through multiple sets of ideas about this, but as we get more aligned and have some concrete ideas to share, we'll do that.





--
Thank you,

James Bayer

Phillip Neumann

Nov 21, 2014, 5:32:34 PM
to vcap...@cloudfoundry.org