
Slaving from DNS masters behind LVS


Nick Urbanik

Feb 12, 2013, 10:00:27 PM
to keepaliv...@lists.sourceforge.net, bind-...@lists.isc.org
Dear Folks,

We have a pair of DNS servers running BIND behind a direct routing LVS
director pair running keepalived. Let's call these two DNS servers A
and B, and the VIP V.

They slave from a hidden master; let's call it M.

I want to allow another machine S to slave from A and B, the pair of
DNS servers that are behind LVS.

Another machine F will forward to the DNS servers behind the load
balancer, A and B.

[There is another similar setup at another location, so there will
be a V1 and V2, A1, A2, B1, B2; all of A1, A2, B1, B2 slave from M.]
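
For concreteness, a minimal named.conf sketch of the A/B side of this
setup, with purely illustrative RFC 5737 addresses (M = 192.0.2.10,
S = 198.51.100.20) and an invented zone name:

    # On A and B: slave the zone from the hidden master M,
    # and allow the external slave S to transfer it from us.
    zone "example.com" {
        type slave;
        masters { 192.0.2.10; };            # M, the hidden master
        allow-transfer { 198.51.100.20; };  # S
        file "slaves/example.com";
    };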

1. Should the machine in the SOA be V, or A or B?
2. Should the NS records for the zones be A, B and V, or just V?
3. Should S slave from A and B, or should it slave from V?
4. Should F forward to V, or to both A and B?
--
Nick Urbanik http://nicku.org 808-71011 nick.u...@optusnet.com.au
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24
I disclaim, therefore I am.

Mike Hoskins (michoski)

Feb 12, 2013, 11:15:26 PM
to bind-...@lists.isc.org
Note: Removing cross-post, but feel free to forward.

-----Original Message-----

From: Nick Urbanik <nick.u...@optusnet.com.au>
Date: Tuesday, February 12, 2013 10:00 PM
To: "keepaliv...@lists.sourceforge.net"
<keepaliv...@lists.sourceforge.net>, "bind-...@lists.isc.org"
<bind-...@lists.isc.org>
Subject: Slaving from DNS masters behind LVS

>Dear Folks,
>
>We have a pair of DNS servers running BIND behind a direct routing LVS
>director pair running keepalived. Let's call these two DNS servers A
>and B, and the VIP V.

We run a similar setup, so I'm looking forward to hearing the community's
answers. My views below.

>They slave from a hidden master; let's call it M.
>
>I want to allow another machine S to slave from A and B, the pair of
>DNS servers that are behind LVS.
>
>Another machine F will forward to the DNS servers behind the load
>balancer, A and B.
>
>[There is another similar setup at another location, so there will
>be a V1 and V2, A1, A2, B1, B2; all of A1, A2, B1, B2 slave from M.]
>
>1. Should the machine in the SOA be V, or A or B?

I would use V.

Some will argue for M if you are doing things like DDNS with DHCP...though
that's not clear here. Even if you are, with the right configuration that
should not require publishing M. I never publish my hidden master's name
in public records.
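
A rough illustration of that choice -- an SOA whose MNAME points at a name
for V rather than at M (all names and addresses here are invented):

    example.com.        IN  SOA  ns-v.example.com. hostmaster.example.com. (
                                 2013021300 ; serial
                                 3600       ; refresh
                                 900        ; retry
                                 604800     ; expire
                                 300 )      ; negative-caching TTL
    ns-v.example.com.   IN  A    192.0.2.53 ; V, the VIP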

>2. Should the NS records for the zones be A, B and V, or just V?

I think it depends on what you are trying to accomplish.

From a Murphy's Law perspective, the VIP could go down (or need to be
taken down for maintenance); if the real servers are reachable by clients
in that case, listing A and B would be useful.

However you might accomplish the same thing with multiple VIPs hosted on
separate LVS clusters pointing to different sets of real servers, where
you only list V, V', etc. This is similar to what we do.

If you really don't want any queries directed to the real servers
themselves (or network topology prevents this), then you would only list V.
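
As a sketch, the "only list V" case might look like this (hypothetical
names, with ns-v resolving to the VIP); the commented-out lines show the
variant that also lists the real servers:

    example.com.       IN  NS  ns-v.example.com.
    ns-v.example.com.  IN  A   192.0.2.53        ; V
    ; example.com.     IN  NS  ns-a.example.com. ; real server A
    ; example.com.     IN  NS  ns-b.example.com. ; real server B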

>3. Should S slave from A and B, or should it slave from V?

Either way you achieve the primary goal of HA, via VIP or masters {}. If
you use the VIP, you need to consider how much you care about the VIP
going down (maybe you don't if your expire time is high). If you use
masters, you need to consider how often you add new servers and require
updates to your configuration.
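
As a sketch of the two options on S (addresses invented):

    # Option 1: slave via the masters list, using the real IPs of A and B
    zone "example.com" {
        type slave;
        masters { 192.0.2.11; 192.0.2.12; };  # A and B
        file "slaves/example.com";
    };

    # Option 2: slave via the VIP only, accepting the (small) risk of V
    # itself being down at refresh time:
    #   masters { 192.0.2.53; };              # V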

>4. Should F forward to V, or to both A and B?

I would actually set up a couple of VIPs in cases like this, and use those
as my forwarders, resolv.conf entries, etc. If a DNS resolver tries a given
VIP and gets a timeout from one real server, odd things might happen if the
client can't fail over to a second VIP (its retry logic will be tied to the
VIP address irrespective of the number of real servers). Edge case for sure,
but something to consider when load balancing DNS.
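
Roughly what I mean on F, assuming two VIPs at illustrative addresses
192.0.2.53 and 192.0.2.54:

    options {
        forwarders { 192.0.2.53; 192.0.2.54; };  # V and V', on separate LVS clusters
        forward only;  # rely on the forwarders rather than falling back to iteration
    };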

WBr...@e1b.org

Feb 13, 2013, 8:11:07 AM
to Nick Urbanik, bind-...@lists.isc.org
Nick wrote on 02/12/2013 10:00:27 PM:

> We have a pair of DNS servers running BIND behind a direct routing LVS
> director pair running keepalived. Let's call these two DNS servers A
> and B, and the VIP V.

Several years ago I was lucky enough to take the ISC class on BIND. One of
my questions going into the class was about using a load balancer in front
of our name servers. We have two VMs for internal resolution and two more
for external.

The instructor said not to use a load balancer as the DNS protocol had the
resilience to handle a server going down and the load balancer adds to the
complexity of troubleshooting problems. We had never had a problem with
either BIND crashing or network issues making them all unavailable, so the
load balancer was really a solution looking for a problem.

Recently, we had to take the slave name servers (1 internal, 1 external)
down to move the VMs to a different storage pool. There were no issues
with everyone continuing to use the masters only.

My current goals are to restructure our DNS, but load balancing is not in
the future here.

--

Bill

Nick Urbanik

Feb 13, 2013, 9:30:01 AM
to WBr...@e1b.org, bind-...@lists.isc.org
Dear WBrown,

Thank you for your helpful reply.

On 13/02/13 08:11 -0500, WBr...@e1b.org wrote:
>Nick wrote on 02/12/2013 10:00:27 PM:
>
>> We have a pair of DNS servers running BIND behind a direct routing LVS
>> director pair running keepalived. Let's call these two DNS servers A
>> and B, and the VIP V.
>
>Several years ago I was lucky enough to take the ISC class on BIND.

Jealous!

>One of my questions going into the class was about using a load
>balancer in front of our name servers. We have two VMs for internal
>resolution and two more for external.
>
>The instructor said not to use a load balancer as the DNS protocol had the
>resilience to handle a server going down and the load balancer adds to the
>complexity of troubleshooting problems. We had never had a problem with
>either BIND crashing or network issues making them all unavailable, so the
>load balancer was really a solution looking for a problem.
>
>Recently, we had to take the slave name servers (1 internal, 1 external)
>down to move the VMs to a different storage pool. There were no issues
>with everyone continuing to use the masters only.
>
>My current goals are to restructure our DNS, but load balancing is not in
>the future here.

I think that it is not necessarily always true that you should avoid a
load balancer. Every day, our DNS caches are answering about 140,000
queries per second. I think that it is rather hard to configure
resolvers to query only three machines yet still meet the demand
unless you either use very massive, expensive machines, or use load
balancers.

So the questions remain.

Phil Mayers

Feb 13, 2013, 10:11:32 AM
to bind-...@lists.isc.org
On 13/02/13 14:30, Nick Urbanik wrote:

>
> I think that it is not necessarily always true that you should avoid a
> load balancer. Every day, our DNS caches are answering about 140,000
> queries per second. I think that it is rather hard to configure
> resolvers to query only three machines yet still meet the demand
> unless you either use very massive, expensive machines, or use load
> balancers.
>
> So the questions remain.

My rule of thumb is this:

1. For client->DNS comms (resolv.conf, DHCP-supplied DNS IPs, etc.) I
use a VIP. This allows for future scalability and adds/moves/changes
without time-consuming reconfiguring of clients, and avoids the problem
where some clients have poor/slow failover between DNS servers (unix
systems without nscd/lwresd).

2. For DNS->DNS comms I use real IPs. This includes "forwarders", NS
records, "masters" statements and so on. The rationale is that DNS
servers, when talking to other DNS servers, almost universally have
fast, intelligent detection of failures, and thus don't need the benefit
of a VIP.

However - as with all things, "it depends". There are circumstances
where VIPs (possibly only backed by one real server) are suitable for
DNS->DNS, and real IPs for client->DNS (e.g. resolv.conf on the DNS
server itself).
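
A quick sketch of the rule of thumb above, with invented addresses (VIP
192.0.2.53 for clients, real IPs 192.0.2.11 and 192.0.2.12 for
server-to-server traffic):

    # Client -> DNS: hand out the VIP (resolv.conf, DHCP option 6, etc.)
    nameserver 192.0.2.53

    # DNS -> DNS: use the real IPs, e.g. in a slave's masters statement
    zone "example.com" {
        type slave;
        masters { 192.0.2.11; 192.0.2.12; };
        file "slaves/example.com";
    };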

There's no one definitively "right" answer, since it depends on what
you're trying to achieve, and what architecture your network and
supporting systems have.

Tony Finch

Feb 13, 2013, 10:34:21 AM
to Nick Urbanik, bind-...@lists.isc.org
Nick Urbanik <nick.u...@optusnet.com.au> wrote:
>
> I think that it is not necessarily always true that you should avoid a
> load balancer. Every day, our DNS caches are answering about 140,000
> queries per second. I think that it is rather hard to configure
> resolvers to query only three machines yet still meet the demand
> unless you either use very massive, expensive machines, or use load
> balancers.

Another option is to use anycast.
http://www.nanog.org/meetings/nanog29/abstracts.php?pt=NjcxJm5hbm9nMjk=

Tony.
--
f.anthony.n.finch <d...@dotat.at> http://dotat.at/
Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at first.
Rough, becoming slight or moderate. Showers, rain at first. Moderate or good,
occasionally poor at first.

Phil Mayers

Feb 13, 2013, 10:40:32 AM
to bind-...@lists.isc.org
On 13/02/13 15:34, Tony Finch wrote:
> Nick Urbanik <nick.u...@optusnet.com.au> wrote:
>>
>> I think that it is not necessarily always true that you should avoid a
>> load balancer. Every day, our DNS caches are answering about 140,000
>> queries per second. I think that it is rather hard to configure
>> resolvers to query only three machines yet still meet the demand
>> unless you either use very massive, expensive machines, or use load
>> balancers.
>
> Another option is to use anycast.
> http://www.nanog.org/meetings/nanog29/abstracts.php?pt=NjcxJm5hbm9nMjk=

In fact, you can do both. Our recursive DNS server is accessible via two
IPs - one virtual IP, hosted on a load-balancer, and one anycast IP
advertised conditionally (on port 53 being open locally) using BGP from
each DNS server. This means you've got some diversity.

Chris Buxton

Feb 13, 2013, 12:54:13 PM
to Nick Urbanik, keepaliv...@lists.sourceforge.net, bind-...@lists.isc.org
On Feb 12, 2013, at 7:00 PM, Nick Urbanik wrote:
> We have a pair of DNS servers running BIND behind a direct routing LVS
> director pair running keepalived. Let's call these two DNS servers A
> and B, and the VIP V.
>
> They slave from a hidden master; let's call it M.
>
> I want to allow another machine S to slave from A and B, the pair of
> DNS servers that are behind LVS.
>
> Another machine F will forward to the DNS servers behind the load
> balancer, A and B.
>
> [There is another similar setup at another location, so there will
> be a V1 and V2, A1, A2, B1, B2; all of A1, A2, B1, B2 slave from M.]
>
> 1. Should the machine in the SOA be V, or A or B?
> 2. Should the NS records for the zones be A, B and V, or just V?
> 3. Should S slave from A and B, or should it slave from V?
> 4. Should F forward to V, or to both A and B?

Generally speaking, if you're going to use a load balancer, use it. Don't go around it. I assume your VIP will actually float between two load balancers, for redundancy.

Why is forwarding involved? Forwarding is a recursive server behavior, but your other questions relate to authoritative service. Mixing the two, especially in a high-traffic environment, is a recipe for disaster. (Not that I haven't implemented that for even very large customers -- the customer is always right unless you can convince them otherwise. Use of multiple views, with match-recursive-only enabled in one of them, can somewhat alleviate the problem.)
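
Roughly what that view split looks like in named.conf -- a sketch only,
with an invented ACL and zone:

    acl internal-clients { 192.0.2.0/24; };     # hypothetical client range

    view "recursive" {
        match-clients { internal-clients; };
        match-recursive-only yes;   # only queries with the RD bit set match here
        recursion yes;
    };

    view "authoritative" {
        match-clients { any; };
        recursion no;
        zone "example.com" {
            type slave;
            masters { 192.0.2.10; };            # the hidden master
            file "slaves/example.com";
        };
    };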

1. Your choice. Mine would be M. My second choice would be either V1 or V2, if there was some need to truly conceal the identity of M.
2. V1 and V2.
3. V1 and V2.
4. V1 and V2.

But as others have pointed out, unless you're getting huge numbers of queries, I wouldn't bother with load balancers for authoritative service. I would only start looking for this type of solution if 6 individual name servers were insufficient to handle the load. And in that case, my first choice would be anycast, because that also gives you geographic redundancy, routing redundancy, etc. That's how the root server clusters are set up, for the most part.

For recursive service, where clients can't be relied upon to effectively use any server beyond the first one they query, load balancers make good sense. But in that case, you (ideally) shouldn't have any zones configured on the name servers other than (possibly) RPZs, stub zones, and (if you really must) conditional forwarding zones.

Chris Buxton
BlueCat Networks