
50 million records under one domain using Bind


Vinay Y S

Dec 13, 2008, 6:39:57 AM
Hi,
I am studying the scalability and performance characteristics of
different DNS servers. The goal is to find the most suitable server to
host a single domain with 50 million records. I am planning to install
Fedora 10 x86_64 on a machine with 32 GB of RAM and use the BIND that
comes with it for this experiment.

If you have any suggestions or comments on how to accomplish this
with BIND, they would be greatly appreciated.

Specifically, I would like to know which build or configuration
options I would have to tweak to make it perform best at this scale.

Also, are there any known deployments of BIND at this scale out there?

Thanks,
--
Vinay Y S
p.s: Where do you guys hang out? Any IRC channel for bind users/developers?
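For anyone reproducing the experiment, a rough sketch of generating a
synthetic zone file of this size with a shell pipeline (the zone name,
host labels, and 10.x.x.x addresses below are purely hypothetical):

    {
      printf '$TTL 3600\n'
      printf '@ IN SOA ns1.example.com. hostmaster.example.com. (1 3600 900 604800 300)\n'
      printf '@ IN NS ns1.example.com.\n'
      printf 'ns1 IN A 10.0.0.1\n'
      awk 'BEGIN {
        for (i = 0; i < 50000000; i++)
          printf "host%d IN A 10.%d.%d.%d\n", i, int(i/65536)%256, int(i/256)%256, i%256
      }'
    } > example.com.zone

Plain incrementing labels keep the file reproducible, so load-time and
memory measurements can be compared across servers.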

Matus UHLAR - fantomas

Dec 13, 2008, 7:55:46 AM
On 13.12.08 17:09, Vinay Y S wrote:
> I am studying the scalability and performance characteristics of
> different DNS servers. Goal is to find the best suitable server to
> host a single domain with 50 million records. I am planning to install
> Fedora 10 x86_64 on a 32GB RAM machine and use the Bind that comes
> with it for this experiment.

What kind of records do you want to store?

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Chernobyl was a Windows 95 beta test site.

Vinay Y S

Dec 13, 2008, 8:31:09 AM
2008/12/13 Matus UHLAR - fantomas <uh...@fantomas.sk>:

> On 13.12.08 17:09, Vinay Y S wrote:
>> I am studying the scalability and performance characteristics of
>> different DNS servers. Goal is to find the best suitable server to
>> host a single domain with 50 million records. I am planning to install
>> Fedora 10 x86_64 on a 32GB RAM machine and use the Bind that comes
>> with it for this experiment.
>
> what kind of records do you want to store?
Mostly A, CNAME, MX and TXT records.

--
Vinay Y S

Matus UHLAR - fantomas

Dec 13, 2008, 8:38:27 AM
> > On 13.12.08 17:09, Vinay Y S wrote:
> >> I am studying the scalability and performance characteristics of
> >> different DNS servers. Goal is to find the best suitable server to
> >> host a single domain with 50 million records. I am planning to install
> >> Fedora 10 x86_64 on a 32GB RAM machine and use the Bind that comes
> >> with it for this experiment.

> 2008/12/13 Matus UHLAR - fantomas <uh...@fantomas.sk>:
> > what kind of records do you want to store?

On 13.12.08 19:01, Vinay Y S wrote:
> Mostly A, CNAME, MX and TXT records.

So they're generic DNS data, nothing special like an RBL?


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.

Windows found: (R)emove, (E)rase, (D)elete

Vinay Y S

Dec 13, 2008, 12:42:37 PM
2008/12/13 Matus UHLAR - fantomas <uh...@fantomas.sk>:
>> > On 13.12.08 17:09, Vinay Y S wrote:
>> >> I am studying the scalability and performance characteristics of
>> >> different DNS servers. Goal is to find the best suitable server to
>> >> host a single domain with 50 million records. I am planning to install
>> >> Fedora 10 x86_64 on a 32GB RAM machine and use the Bind that comes
>> >> with it for this experiment.
>
>> 2008/12/13 Matus UHLAR - fantomas <uh...@fantomas.sk>:
>> > what kind of records do you want to store?
>
> On 13.12.08 19:01, Vinay Y S wrote:
>> Mostly A, CNAME, MX and TXT records.
>
> so they're generic DNS data, nothing special like RBL ?

The record names and values could be any valid labels. All the record
names I plan to use for the tests are of the form sub.domain.tld, and
the values are IP addresses for A records and other suitable values
for the other record types. Would the nature of the record types and
values have a significant effect on the result of this experiment?

--
Vinay Y S

Matus UHLAR - fantomas

Dec 14, 2008, 7:42:10 AM
On 13.12.08 23:12, Vinay Y S wrote:
> The record names and values could be any valid labels. All the record
> names I plan to use for tests are of form sub.domain.tld and values
> are IP addresses for A record and other suitable values for other
> record types. Would the nature of record types and values have
> significant effect on the result of this experiment?

For example, rbldnsd supports only a few record types, but it can
store them very efficiently, e.g. IP addresses.

For arbitrary DNS record types and values, it's apparently not useful.


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.

WinError #98652: Operation completed successfully.

Stephane Bortzmeyer

Dec 14, 2008, 8:06:05 AM
On Sat, Dec 13, 2008 at 05:09:57PM +0530,
Vinay Y S <vi...@vys.in> wrote
a message of 23 lines which said:

> Also, is there any known deployments of bind of this scale out there?

Half of the ".de" name servers are BIND and ".de" has 12 millions of
domains, which probably means close to 50 millions of records.

Robert

Dec 14, 2008, 1:01:16 PM
On Sun, 14 Dec 2008 14:06:05 +0100, Stephane Bortzmeyer wrote:

> On Sat, Dec 13, 2008 at 05:09:57PM +0530,
> Vinay Y S <vi...@vys.in> wrote
> a message of 23 lines which said:
>
>> Also, is there any known deployments of bind of this scale out there?
>
> Half of the ".de" name servers are BIND and ".de" has 12 millions of
> domains, which probably means close to 50 millions of records.

I believe he is talking about one server, not a zone spread out over
several servers. I think he is trying to see the limit of how many
records one server could serve reliably.


--

Regards
Robert

Linux User #296285
http://counter.li.org

Gregory Hicks

Dec 14, 2008, 9:36:43 PM

> From: Robert <no...@noplace.nowhere>
> Date: Sun, 14 Dec 2008 13:01:16 -0500

>
> On Sun, 14 Dec 2008 14:06:05 +0100, Stephane Bortzmeyer wrote:
>
> > On Sat, Dec 13, 2008 at 05:09:57PM +0530,
> > Vinay Y S <vi...@vys.in> wrote
> > a message of 23 lines which said:
> >
> >> Also, is there any known deployments of bind of this scale out there?
> >
> > Half of the ".de" name servers are BIND and ".de" has 12 millions of
> > domains, which probably means close to 50 millions of records.
>
> I believe he is talking on one server not spread out over several
> servers. I think he is trying to see the limit on one server as to how
> many records it could serve reliably.

I believe that the limiting factor is not going to be the size of the
database, but how fast the machine can process network requests, i.e.,
how many queries per second. If the machine can only handle 10k
queries per second, then the MOST it will serve is 10k qps even if 11k
qps are coming in.

Regards,
Gregory Hicks

---------------------------------------------------------------------
Gregory Hicks | Principal Systems Engineer
| Direct: 408.569.7928

People sleep peaceably in their beds at night only because rough men
stand ready to do violence on their behalf -- George Orwell

The price of freedom is eternal vigilance. -- Thomas Jefferson

"The best we can hope for concerning the people at large is that they
be properly armed." --Alexander Hamilton

JINMEI Tatuya / 神明達哉

Dec 15, 2008, 2:37:50 PM
At Sat, 13 Dec 2008 17:09:57 +0530,

"Vinay Y S" <vi...@vys.in> wrote:

> I am studying the scalability and performance characteristics of
> different DNS servers. Goal is to find the best suitable server to
> host a single domain with 50 million records. I am planning to install
> Fedora 10 x86_64 on a 32GB RAM machine and use the Bind that comes
> with it for this experiment.
>

> If you have any suggestions or comments regarding how to accomplish
> this with Bind, it would be greatly helpful.
>
> Specifically, I would like to know what build or config options I
> would have to tweak to make it work best for this scale.

If you plan to use a plain zone file for the 50 million records,
rather than using a separate backend database, you may want to
precompile your zone file with named-compilezone. It will roughly
halve the load time compared to the plain text format.

---
JINMEI, Tatuya
Internet Systems Consortium, Inc.
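A rough sketch of what that precompilation looks like (the raw master
file format is available in recent BIND 9 releases; the zone name and
paths here are hypothetical):

    # Convert the text zone file into BIND's binary "raw" format
    named-compilezone -F raw -o /var/named/example.com.zone.raw \
        example.com /var/named/example.com.zone

    # The zone statement then points at the raw file, e.g.:
    #   zone "example.com" {
    #       type master;
    #       file "example.com.zone.raw";
    #       masterfile-format raw;
    #   };

As a side benefit, named-compilezone performs the same checks as
named-checkzone, so errors in a generated 50-million-record file show
up before named tries to load it.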

Scott Haneda

Dec 15, 2008, 4:03:47 PM
Out of curiosity, if one zone is to hold 50 million records, what
would they all be for? I can't even imagine blogspot or godaddy being
in that league.

Perhaps with this many records just using a wildcard would be simpler?

Then again, maybe this is a new TLD, or an old one being consolidated?

--
Scott
iPhone says hello.

On Dec 15, 2008, at 11:37 AM, JINMEI Tatuya / 神明達哉 <Jinmei_...@isc.org> wrote:

> If you plan to use a plain zone file for the 50 million records,
> rather than using a separate backend database, you may want to
> precompile your zone file by named-compilezone.

Vinay Y S

Dec 18, 2008, 11:23:51 AM
>> I believe he is talking on one server not spread out over several
>> servers. I think he is trying to see the limit on one server as to how
>> many records it could serve reliably.

Can the records of a single domain be spread across multiple machines
(sharding?) using BIND?

> I believe that the limiting factor is not going to be the size of the
> database, but how fast the machine can process network requests. Ie,
> how many queries per second; If the machine can only handle 10k
> queries per second, then the MOST it will see is 10k qps even if 11k
> qps are coming in.

Is there any good tool to benchmark this metric? Searching the
Internet, I've found queryperf so far, which I'll try.

--
Vinay Y S
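For what it's worth, a rough sketch of how a queryperf run might look
(queryperf ships in the contrib/ directory of the BIND 9 source; the
names below assume a hypothetical host0..host49999999 test zone):

    # Build a query data file: one "name type" pair per line
    awk 'BEGIN {
      srand()
      for (i = 0; i < 100000; i++)
        printf "host%d.example.com A\n", int(rand() * 50000000)
    }' > queries.txt

    # Replay the queries against the server under test; queryperf
    # reports the sustained queries-per-second rate at the end
    queryperf -d queries.txt -s 127.0.0.1 -p 53

Randomizing the query names matters for a 50-million-record zone,
since hitting the same few names repeatedly would exercise only a tiny
fraction of the in-memory database.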

Vinay Y S

Dec 18, 2008, 11:31:37 AM
> If you plan to use a plain zone file for the 50 million records,
> rather than using a separate backend database, you may want to

What backend database options are available? Is bind-sdb actively
developed, and is it production-ready?

> precompile your zone file by named-compilezone. It will make load
> time twice as short as it is with the plain text format.

Thanks for the tip. I'll give it a shot. Currently, a text zone file
with 50 million records takes about 10 minutes to load on a machine
with 16 GB RAM and dual quad-core processors.

JINMEI Tatuya / 神明達哉

Dec 18, 2008, 3:08:49 PM
At Thu, 18 Dec 2008 22:01:37 +0530,

"Vinay Y S" <vi...@vys.in> wrote:

> > If you plan to use a plain zone file for the 50 million records,
> > rather than using a separate backend database, you may want to
>
> What are the backend database options available? Is bind-sdb active
> developed and is it production ready?

Check DLZ. I don't know much about it and can't provide specific
answers. I'm sure some others on this list can.

---
JINMEI, Tatuya
Internet Systems Consortium, Inc.

Andrew Ferk

Dec 30, 2008, 1:27:29 AM
> What are the backend database options available? Is bind-sdb active
> developed and is it production ready?

You can use MySQL with DLZ. I have yet to get it working
successfully, but that's another issue.

One of the reasons I wanted to use a database was for the speed
increase. I would probably look into using DLZ.

Maybe someone has a better solution, in which case I will probably try it myself.

David Ford

Dec 30, 2008, 1:35:28 AM
I use DLZ with Postgres. It's been working pretty well for me for a
while now.

-david

Andrew Ferk wrote:
>> What are the backend database options available? Is bind-sdb active
>> developed and is it production ready?
>>
>
> You can use mysql with dlz. I have yet to get it successfully
> working, but that's another issue.
>
> One of the reasons I wanted to use a database was for the speed
> increase. I would probably look into using dlz.


Scott Baker

Dec 30, 2008, 2:36:16 AM
Andrew Ferk wrote:
>> What are the backend database options available? Is bind-sdb active
>> developed and is it production ready?
>
> You can use mysql with dlz. I have yet to get it successfully
> working, but that's another issue.
>
> One of the reasons I wanted to use a database was for the speed
> increase. I would probably look into using dlz.
>
> Maybe someone has a better solution, in which case, I will probably try myself.

Just out of curiosity, in what real-world scenario do you have 50
million records under one domain?

- Scott

David Ford

Dec 30, 2008, 2:40:36 AM
I don't. I have a working DLZ setup.

Scott Baker wrote:
> Just out of curiosity, what real world scenario do you have 50 million
> records under one domain?
>


Bill Larson

Dec 30, 2008, 6:39:32 AM
On Dec 29, 2008, at 11:35 PM, David Ford wrote:

> I use DLZ w/ postgres. It's been working pretty good for me for a
> while
> now.

Another "just out of curiosity" question. What sort of performance do
you see with BIND/DLZ/Postgres?

The http://bind-dlz.sourceforge.net/ site has some BIND-DLZ
performance test results listed. I don't know what version of BIND-9
they were using and I'm sure it is not current. With straight BIND-9
they were seeing 16,000 QPS, a reasonable number. With the Postgres
DLZ they saw less than 600 QPS. I'm sure that this performance can be
improved with fast hardware and (hopefully) a newer version of BIND.

With 50 million records, at 600 QPS it would take about a day to
perform a single query for each of these records with the server doing
nothing else. It doesn't appear to me that you could serve this many
records using BIND-DLZ with Postgres in any environment that actually
uses all 50 million RRs. Then again, even at 16,000 QPS it would still
take about an hour to perform a single query for each of these 50
million records.

Granted, the startup/reload speed increase using DLZ will be
impressive; what I am questioning is having 50 million DNS resource
records on any DNS system. Is DNS an appropriate "database" for
storing 50 million records?

Bill Larson

> -david


>
> Andrew Ferk wrote:
>>> What are the backend database options available? Is bind-sdb active
>>> developed and is it production ready?
>>>
>>
>> You can use mysql with dlz. I have yet to get it successfully
>> working, but that's another issue.
>>
>> One of the reasons I wanted to use a database was for the speed
>> increase. I would probably look into using dlz.
>

David Ford

Dec 30, 2008, 1:25:59 PM
I don't suggest using a "heavy" DB back end such as SQL for 50M records
without thought. Each DNS query might do several SQL lookups, depending
on the type of query and the number of hostname components. Factor in a
mail server and the number of hits becomes a dozen for one instance;
i.e., a.b.c.def.com will get forward lookups for each component and will
also get MX and PTR lookups. Toss in anti-spam and, without caching,
you're talking several dozen hits easily. For just one mail daemon.
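To make that concrete, here is a sketch of the kind of lookups a SQL
back end ends up doing for a single query such as "www.example.com A"
(the database name, table, and columns are hypothetical, not the
actual DLZ schema):

    # 1. Find out whether we are authoritative for the zone
    psql dns_data -c "SELECT zone FROM dns_records WHERE zone = 'example.com';"

    # 2. Fetch the matching records for the queried name
    psql dns_data -c "SELECT ttl, type, data FROM dns_records
                       WHERE zone = 'example.com' AND host = 'www';"

Multiply that by every name a busy mail server resolves and the
round-trips to the database add up quickly unless something caches in
front of it.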

I've never done a high-load test. I have about 50 domains, three
nameservers, and about 10 servers that point at these three, with no
concerns. The reason I wanted SQL as my back end was the extreme ease
of making immediately available updates and of implementing central
web-based management of the records. I did see that 16K/600 QPS number
before, but that was several releases ago when DLZ was brand new. I'm
also of the opinion that a real DBA could improve significantly on the
query design for efficiency.

Again, SQL is rather heavy as a back end for DNS, which really has
little to do with relational data. HBase is probably a much more
efficient approach, as it is designed for huge volumes of non-relational
data. A front-end cache is also likely to increase the QPS by an
incredible amount. The best reason I can offer to justify using DLZ is
that you can abstract the back end entirely from BIND itself. It can
become distributed, cached, profiled, managed in a variety of disparate
ways, and accelerated without any modifications to BIND itself.

The only drawback to DLZ that I have encountered so far is DNSSEC.
Not having a flat file to create a signature from is an issue. However,
I haven't had the time to address this for a while now, and I don't know
whether the current releases of BIND have incorporated any thought to
handling DNSSEC for DLZ zones. Very few people use DLZ, but I'm quite
sure that a solution exists or will be made soon.

-david

Bill Larson wrote:
> On Dec 29, 2008, at 11:35 PM, David Ford wrote:
>
>> I use DLZ w/ postgres. It's been working pretty good for me for a while
>> now.
>
> Another "just out of curiosity" question. What sort of performance do
> you see with BIND/DLZ/Postgres?
>
> The http://bind-dlz.sourceforge.net/ site has some BIND-DLZ
> performance test results listed. I don't know what version of BIND-9
> they were using and I'm sure it is not current. With straight BIND-9
> they were seeing 16,000 QPS, a reasonable number. With the Postgres
> DLZ they saw less than 600 QPS. I'm sure that this performance can be
> improved with fast hardware and (hopefully) a newer version of BIND.
>
> With 50 million records, it would take about one day to perform a
> single query for each of these records with the server doing nothing
> else. It doesn't appear to me that you could serve this many records
> using BIND-DLZ with Postgres in any environment that actually uses all
> 50 million RRs. Then again, at 16000 QPS, it would still take about
> an hour to perform a single query for each of these 50 million records.
>
> Granted, the startup/reload speed increase using DLZ will be
> impressive, what I am questioning is having 50 million DNS resource
> records on any DNS system. Is DNS an appropriate "database" for
> storing 50 million records?
>
> Bill Larson

