How do I disable the cache in bind-9.6? ttl=0 is a bad idea.
if you know that setting TTL to 0 is a bad idea, why do you think that
disabling the cache in BIND is not a bad idea?
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Boost your system's speed by 500% - DEL C:\WINDOWS\*.*
Because under high load the cache grows to the maximum system size and
named stops responding to queries. This is a known problem.
> Matus UHLAR - fantomas wrote:
> > if you know that setting TTL to 0 is a bad idea, why do you think that
> > disabling the cache in BIND is not a bad idea?
On 20.01.09 18:39, Dmitry Rybin wrote:
> Because under high load the cache grows to the maximum system size and
> named stops responding to queries. This is a known problem.
Did you set max-cache-size to a sane value?
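For instance, something like this in options (the sizes and TTLs here are
only illustrative placeholders, not a tuned recommendation):

    options {
            // cap the resolver cache; named evicts old entries as this
            // limit (per view) is approached instead of growing without bound
            max-cache-size 256M;
            // optionally also cap how long positive/negative answers are kept
            max-cache-ttl 86400;
            max-ncache-ttl 3600;
    };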
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Enter any 12-digit prime number to continue.
Dmitry Rybin wrote:
> Matus UHLAR - fantomas wrote:
>> On 20.01.09 12:49, Dmitry Rybin wrote:
>>> How do I disable the cache in bind-9.6? ttl=0 is a bad idea.
>> if you know that setting TTL to 0 is a bad idea, why do you think that
>> disabling the cache in BIND is not a bad idea?
>>
>
> Because under high load the cache grows to the maximum system size and
> named stops responding to queries. This is a known problem.
This is NOT a "known problem" in 9.6. Please provide your configuration
and logs that show the issue that you are having.
AlanC
TTL settings are part of authoritative zone data, which is
completely independent of whether you disable caching in the
nameserver.
On Jan 20, 2009, at 4:49 AM, Dmitry Rybin wrote:
> Hello!
>
> How do I disable the cache in bind-9.6? ttl=0 is a bad idea.
On 20.01.09 14:44, John Wobus wrote:
> Disabling the cache makes sense if the purpose of your
> nameserver is to provide your authoritative zone data and you
> have a different nameserver to handle your site's general
> DNS queries.
In such a case it's much better to disable recursion and not use that server
for resolution, unless it's really a MUST (e.g. on firewalls).
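For example, a minimal authoritative-only sketch (just to illustrate the
knobs; adapt it to your own configuration):

    options {
            // serve only the zones this server is authoritative for;
            // do not resolve or cache answers on behalf of clients
            recursion no;
            allow-recursion { none; };
    };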
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Silvester Stallone: Father of the RISC concept.
This is a known problem with all versions of BIND. named grows to 2GB, becomes
slow to answer queries, and can't be restarted cleanly, only killed with
kill -9. Seen on FreeBSD 5.x through 7.1 and Linux 2.6.
============
options {
        directory "/etc/namedb";
        max-cache-size 16M;
        max-cache-ttl 3600;
        max-ncache-ttl 1800;
        cleaning-interval 10;
        transfers-in 1000;
        transfers-out 1000;
        transfers-per-ns 100;
        minimal-responses yes;
        allow-recursion {
                xxx.xxx.xxx.xxx;
        };
        recursive-clients 10000;
        clients-per-query 80;
        max-clients-per-query 100;
};

view "world" {
        zone "." {
                type hint;
                file "named.root";
        };
        zone "0.0.127.IN-ADDR.ARPA" {
                type master;
                file "localhost.rev";
        };
};

view "view0" {
        max-cache-size 16M;
        match-clients {
                XXX.XXX.XXX.XXX;
        };
        include "net-views/view0.conf";
};

[... skip 48 views ...]

view "view50" {
        max-cache-size 8M;
        match-clients {
                XXX.XXX.XXX.XXX;
        };
        include "net-views/view50.conf";
};
> >> Matus UHLAR - fantomas wrote:
> >>> if you know that setting TTL to 0 is a bad idea, why do you think that
> >>> disabling the cache in BIND is not a bad idea?
> > Dmitry Rybin wrote:
> >> Because under high load the cache grows to the maximum system size and
> >> named stops responding to queries. This is a known problem.
> Alan Clegg wrote:
> > This is NOT a "known problem" in 9.6. Please provide your configuration
> > and logs that show the issue that you are having.
On 21.01.09 12:10, Dmitry Rybin wrote:
> This is a known problem with all versions of BIND. named grows to 2GB, becomes
> slow to answer queries, and can't be restarted cleanly, only killed with
> kill -9. Seen on FreeBSD 5.x through 7.1 and Linux 2.6.
This is _NOT_ a problem of BIND. This is a problem of its admin who can't
read the docs and set up max-cache-size, which does exactly what is needed
in this case.
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
It's now safe to throw off your computer.
>
> This is _NOT_ a problem of BIND. This is a problem of its admin who can't
> read the docs and set up max-cache-size, which does exactly what is needed
> in this case.
>
Hmm... Then why does BIND allocate all system memory if max-cache-size is 16M?
And the views... 50 views: 16M * 50 = 800M. Only 800M, which is not the 3-4GB
of system memory we see.
+50 views of zone data + memory for 100000 clients + ....
You have a 32-bit build, which gives a maximum of 2GB of data.
You are just trying to cram too much into too small a place.
Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: Mark_A...@isc.org
>>>
>> Hmm... Then why does BIND allocate all system memory if max-cache-size is 16M?
>> And the views... 50 views: 16M * 50 = 800M. Only 800M, which is not the 3-4GB
>> of system memory we see.
>
> +50 views of zone data + memory for 100000 clients + ....
>
> You have a 32-bit build, which gives a maximum of 2GB of data.
>
> You are just trying to cram too much into too small a place.
OK. Maybe you can give some recommendations?
file /usr/local/sbin/named
/usr/local/sbin/named: ELF 64-bit LSB executable, x86-64, version 1
(FreeBSD), for FreeBSD 7.1 (701100), dynamically linked (uses shared
libs), FreeBSD-style, stripped
On 21.01.09 17:38, Dmitry Rybin wrote:
> Hmm... Then why does BIND allocate all system memory if max-cache-size is 16M?
> And the views... 50 views: 16M * 50 = 800M. Only 800M, which is not the 3-4GB
> of system memory we see.
Lower it to e.g. 4-8MB to see if it helps a bit. But I'd ask whether 50
views are really needed here... and if you have 800MB of cache and 4GB of
used memory, I'd say that the size of the cache is not the real problem.
BTW, is max-cache-size really per-view?
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
You have the right to remain silent. Anything you say will be misquoted,
then used against you.
The way I read this, you are using one view for each of the different
client IPs you have. Do you really need all of those, or are you just
trying to have an internal and an external view for a range of clients?
The match-clients statement takes a list of IPs, CIDR ranges, or ACLs;
see the sketch below.
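For example, a rough sketch (the ACL name, prefixes, and include file are
made-up placeholders) of one view covering a whole client range instead of
one view per address:

    acl "clients-east" {
            192.0.2.0/24;           // example prefixes only
            198.51.100.0/24;
    };

    view "east" {
            match-clients { "clients-east"; };
            max-cache-size 64M;
            include "net-views/east.conf";   // hypothetical, mirroring your existing layout
    };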
Stefan
--
printk("%s: huh ? Who issued this format command ?\n")
linux-2.6.6/drivers/block/ps2esdi.c
> > +50 views of zone data + memory for 100000 clients + ....
> >
> > You have a 32bit build which will give a maximum of 2G data.
> >
> > You are just trying to cram too much into too small a place.
>
> OK. Maybe you can give some recommendations?
As Mark said, having 50 views, each of which holds a non-negligible
amount of cache, is an extreme configuration. Also, since the matching
view is identified by a linear search for every query, it may also
impact your query processing performance. So you should primarily
consider reducing the number of views anyway.
Still, I have noticed that cache management may not work well (even with a
single view), especially when named is multi-threaded and configured with a
small max-cache-size such as 16MB. (It's ironic that using a small
max-cache-size can hinder cache cleaning, resulting in a larger memory
footprint.) I'm developing a fix for this problem. Can you try the
patch available at:
http://www.jinmei.org/patch/bind9-lrucache.diff
(should be cleanly applicable to 9.6).
and let me know if it mitigates the problem?
Other recommendations:
- I previously suggested using a separate cache-only view and forwarding
  all recursive queries to that view. Have you tried that? If you
  have, didn't it work as I hoped?
- BIND 9.7 will have a new option "attach-cache" exactly for an
  extraordinary operational environment like yours: it allows multiple
  views to share a single cache to save memory (see the sketch below).
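As a rough illustration only (view, ACL, and cache names are made up; check
the 9.7 documentation for the final syntax), that could look like:

    view "view0" {
            match-clients { "clients-0"; };
            attach-cache "shared";   // views naming the same cache share one cache
            include "net-views/view0.conf";
    };
    view "view1" {
            match-clients { "clients-1"; };
            attach-cache "shared";
            include "net-views/view1.conf";
    };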
---
JINMEI, Tatuya
Internet Systems Consortium, Inc.
>>> and let me know if it mitigates the problem?
>
> On 29.01.09 22:50, Dmitry Rybin wrote:
>> Oh, great work. I'll try tomorrow.
named with JINMEI Tatuya's patch:
max-cache-size 800M;
Morning statistics:
version: 9.6.0-P1
CPUs found: 8
worker threads: 8
number of zones: 1040
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is OFF
recursive clients: 167/4900/5000
tcp clients: 0/100
server is up and running
Started at Feb 3 00:51 (Now Feb 4 11:15:37) MSK
Startup mem: 890M
Cur. memory usage: 2534M
System limit: 16G
++ Incoming Requests ++
112510181 QUERY
550 IQUERY
42 STATUS
1043 RESERVED3
101299 NOTIFY
101 UPDATE
14 RESERVED11
++ Incoming Queries ++
1929 RESERVED0
75241540 A
2105214 NS
100 CNAME
276292 SOA
2490 WKS
26826476 PTR
2 HINFO
4690581 MX
236619 TXT
24 X25
2003829 AAAA
17 LOC
713837 SRV
46397 NAPTR
58 A6
1022 SPF
4 IXFR
5 AXFR
317561 ANY
23 Others
++ Outgoing Queries ++
>> Yes, I tried it. But I can't set TTL to 0; it didn't work. Recursive queries
>> fail, and authoritative answers go back to clients with TTL 0 :(
>
> Yes, that is what "Setting TTL to 0" means.
>
>> ~50 views,
>
> can't you really lower the views count?
>
It's impossible :-( More than 500,000 clients use this BIND server, and we have
to use views to split the load across other services.
> named with JINMEI Tatuya's patch:
> max-cache-size 800M;
It's way too much, if this applies to all of the 50 views.
---
JINMEI, Tatuya
Internet Systems Consortium, Inc.
> Matus UHLAR - fantomas wrote:
> > can't you really lower the views count?
On 04.02.09 11:23, Dmitry Rybin wrote:
> It's impossible :-( More than 500,000 clients use this BIND server, and we have
> to use views to split the load across other services.
Pardon? Split load? Do you use views to point different clients at different
servers to lower the load on them?
If so, you should rather use DNS load balancing or some kind of hardware or
software load balancer.
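For plain DNS round-robin you would simply publish several A records for the
service name; a minimal named.conf sketch (the rrset-order statement is
optional and only tunes how named rotates equal records in answers):

    options {
            // rotate equal records in answers so clients spread
            // across all of the published addresses
            rrset-order { order cyclic; };
    };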
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
The early bird may get the worm, but the second mouse gets the cheese.
>
> On 04.02.09 11:23, Dmitry Rybin wrote:
>> It's impossible :-( More than 500,000 clients use this BIND server, and we have
>> to use views to split the load across other services.
> > named with JINMEI Tatuya's patch:
> > max-cache-size 800M;
> It's way too much, if this applies to all of the 50 views.
Oh! I have decreased max-cache-size to 16MB.
>
> Pardon? Split load? Do you use views to point different clients at different
> servers to lower the load on them?
>
> If so, you should rather use DNS load balancing or some kind of hardware or
> software load balancer.
>
At first we used DNS load balancing. After the client base grew, only the
current scheme works for us. We have thought about it, but only BIND with the
current configuration suits us.
Another option is PowerDNS Recursor with Lua scripting (currently in testing).
No, I did not write that. Please don't break quoting.
> > Pardon? Split load? Do you use views to point different clients at different
> > servers to lower the load on them?
> >
> > If so, you should rather use DNS load balancing or some kind of hardware or
> > software load balancer.
> >
>
> At first we used DNS load balancing. After the client base grew, only the
> current scheme works for us. We have thought about it, but only BIND with the
> current configuration suits us.
Yes, but it now seems to have reached its limits, so you should think
about changing your architecture...
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Quantum mechanics: The dreams stuff is made of.
> >> max-cache-size 800M;
> >
> > It's way too much, if this applies to all of the 50 views.
>
> With your patch? Total memory on the server is 12GB.
In case you are confused: this patch only makes named honor
max-cache-size (+ some possible margin) for each view, unlike the
unreleased new feature 'attach-cache'. Each of your 50 views can
still consume up to 800MB, so you need to expect at least 50 * 800MB
of memory footprint in the worst case.
---
JINMEI, Tatuya
Internet Systems Consortium, Inc.
> > > max-cache-size 800M;
>
> > It's way too much, if this applies to all of the 50 views.
>
> Oh! I have decreased max-cache-size to 16MB.
Okay, and according to this:
: Started at Feb 3 00:51 (Now Feb 4 11:15:37) MSK
: Startup mem: 890M
: Cur. memory usage: 2534M
the additional memory needed while running is 1644M (2534 - 890), or
32.88M per view (if the number of views is 50). This seems plausible,
considering the other memory overhead per view. If the memory footprint
has now stabilized at that point, I guess you're fine with that, right?
(And you could increase max-cache-size to, e.g., 64M.)