
DNS Server DoS Attacks


From: Alec H. Peterson
Date: Nov 22, 2002, 8:23:08 PM

These attacks seem to be on the rise. There were the well-publicized ones
on the root servers a few weeks ago, and yesterday UltraDNS was hit with
one. It occurs to me that these attacks have the potential to completely
shut down the DNS system if they are structured properly. I have a thought
on how to make the system more resilient to authoritative servers getting
hammered, and I am interested in hearing some thoughts on it.

My thought is to add another TTL to DNS responses, similar to the SOA
maximum parameter. The current TTL would be similar to the SOA minimum.
This would still allow for records to expire in a reasonable amount of
time, but it would also allow for DNS responses to be answered in the event
that servers in the hierarchy are unreachable for some reason. It occurs
to me that it is possible to retrofit existing DNS servers to have a static
maximum timeout without any protocol modifications.
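
A minimal sketch of the lookup logic I have in mind (pseudocode only; the
names and the "emergency" limit are illustrative, not any existing
implementation):

import time

# Hypothetical cache entry: the normal TTL still governs freshness, while
# an "emergency TTL" bounds how long a stale answer may be served if every
# authoritative server is unreachable.
class CacheEntry:
    def __init__(self, rrset, ttl, emergency_ttl):
        self.rrset = rrset
        self.stored = time.time()
        self.ttl = ttl                      # normal expiry, set by the zone owner
        self.emergency_ttl = emergency_ttl  # upper bound for stale reuse

def lookup(cache, name, query_authoritative):
    entry = cache.get(name)
    age = time.time() - entry.stored if entry else None

    if entry and age <= entry.ttl:
        return entry.rrset                  # fresh: behave exactly as today

    try:
        fresh = query_authoritative(name)   # re-populate as usual (returns a CacheEntry)
        cache[name] = fresh
        return fresh.rrset
    except TimeoutError:
        # Authoritative servers unreachable (e.g. under attack): fall back
        # to the stale copy, but only within the emergency bound.
        if entry and age <= entry.emergency_ttl:
            return entry.rrset
        raise                               # nothing usable; fail as today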

Anyway, the way I see it, since DNS already has a caching infrastructure
built in, it makes sense to take extra advantage of that infrastructure when
things are under attack.

Alec

--
Alec H. Peterson -- a...@hilander.com
Chief Technology Officer
Catbird Networks, http://www.catbird.com

--
to unsubscribe send a message to namedroppe...@ops.ietf.org with
the word 'unsubscribe' in a single line as the message text body.
archive: <http://ops.ietf.org/lists/namedroppers/>

From: Paul Vixie
Date: Nov 22, 2002, 9:14:54 PM

> My thought is to add another TTL to DNS responses, similar to the SOA
> maximum parameter. The current TTL would be similar to the SOA minimum.
> This would still allow for records to expire in a reasonable amount of
> time, but it would also allow for DNS responses to be answered in the event
> that servers in the hierarchy are unreachable for some reason.

keeping copies of somebody else's authoritative data and using or reusing
them beyond the (TTL,SOA) parameters would be a disaster. even with DNSSEC
it will be dangerous, but the amount of bad data that would circulate without
end under such a scheme is unthinkably horrible. (BIND has been criticised
for accelerating TTL depreciation when reusing additional data, but this is
exactly the kind of data pattern that behaviour was designed to end.)

> It occurs to me that it is possible to retrofit existing DNS servers
> to have a static maximum timeout without any protocol modifications.

at the moment, only 2% of the queries hitting the root servers actually need
to be answered -- the rest is pure swill, just errors and side effects. there
is, unfortunately, no way to know in the upstream routers which 2% is which,
and so we deliver the whole thing and answer it as best we can. what this
means, though, is that the impact of a root server attack would take several
days to be felt. while pulsar-style attacks can be harder to track to source,
the "off" part of the cycle leaves time for retries to succeed and thus let
the "useful" 2% still bear some fruit. a solid attack lasting several days
would be trackable, unless it comes from a million-drone windows/xp army,
which leads to the ugly necessity of "massive-scale bgp4 anycasting", which
at least two root server operators are already planning to implement.

(or we could secure the edge, but i guess that's too hard.)

> Anyway, the way I see it since DNS already has a caching infrastructure
> built in it makes sense to take extra advantage of that infrastructure when
> things are under attack.

if people would just cache what they already receive, then 98% of the queries
seen by the root servers would never be sent. so, i think there's ample
evidence to refute the idea that the internet's caching infrastructure is
a model we can build on.

From: Paul Vixie
Date: Nov 22, 2002, 9:48:20 PM

> I see your point that a sustained attack should be easy to track down, but
> I think it would be really nice if we could preemptively come up with a way
> to help mitigate these attacks, instead of operating in a reactive mode.

well, like i said, we could secure the edge.

From: Bill Manning
Date: Nov 22, 2002, 11:10:43 PM

% --On Saturday, November 23, 2002 2:33 AM +0000 Paul Vixie <pa...@vix.com>
% wrote:
% > well, like i said, we could secure the edge.
%
% While we're at it, why don't we deploy IPv6.
%
% Alec
%
we are. :) just not as quickly or as transparently
as some would like..



--bill

From: Alec H. Peterson
Date: Nov 23, 2002, 4:01:40 AM

--On Saturday, November 23, 2002 1:54 AM +0000 Paul Vixie <pa...@vix.com>
wrote:

>
> if people would just cache what they already receive, then 98% of the
> queries seen by the root servers would never be sent. so, i think
> there's ample evidence to refute the idea that the internet's caching
> infrastructure is a model we can build on.

That is a good point, and one that had actually been made to me before I
posted.

I see your point that a sustained attack should be easy to track down, but
I think it would be really nice if we could preemptively come up with a way
to help mitigate these attacks, instead of operating in a reactive mode.

Alec

--
Alec H. Peterson -- a...@hilander.com
Chief Technology Officer
Catbird Networks, http://www.catbird.com


From: Alec H. Peterson
Date: Nov 23, 2002, 4:02:52 AM

--On Saturday, November 23, 2002 2:33 AM +0000 Paul Vixie <pa...@vix.com>
wrote:

>
> well, like i said, we could secure the edge.

While we're at it, why don't we deploy IPv6.

Alec

From: Rob Thomas
Date: Nov 23, 2002, 4:05:32 AM

] well, like i said, we could secure the edge.

Agreed. In one study I conducted of an oft-attacked web site, 66.85%
of all naughty packets received were obvious bogons. Not just spoofed
legitimate addresses, but outright bogons. Honestly, if I never receive
another packet from 127.1.2.3 I'll be a happy man. :) Think of the
amount of garbage we could avoid with some reasonably simple filtering.
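
As a rough sketch of how cheap such a check is (the prefix list below is a
tiny, illustrative subset, not a maintained bogon list):

import ipaddress

# Illustrative subset of bogon prefixes: unspecified, RFC 1918, and loopback
# space. A real deployment would track a maintained list.
BOGON_PREFIXES = [ipaddress.ip_network(p) for p in (
    "0.0.0.0/8", "10.0.0.0/8", "127.0.0.0/8",
    "172.16.0.0/12", "192.168.0.0/16",
)]

def is_bogon(src):
    """Return True if the packet's source address can't be legitimate."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in BOGON_PREFIXES)

# e.g. is_bogon("127.1.2.3") -> True; drop it at the edge.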

The results of the study (along with a few others) can be seen in a
presentation I gave to SURFnet entitled "60 Days of Basic Naughtiness."
You will find a zipped copy, in PowerPoint format, here:

http://www.cymru.com/Presentations/60Days.zip

The analyzed attacks are tame in comparison to what is seen today. The
simple things work, often to a great degree. Raising the bar won't
solve the world's problems, but it will make things a little better.

--
Rob Thomas
http://www.cymru.com
ASSERT(coffee != empty);

From: John M. Brown
Date: Nov 23, 2002, 4:10:26 AM

packet forwarding engines won't tell the difference between
a good query and a bad query without serious penalty to the
PFE performance.

your idea would break caching and cause more flotsam to hang
around in various systems.

personally, I like having things age out of the DNS caches
during an attack. It lets me change the A RR for a victim to
a different IP. Useful for those scripts that don't update
their cached IP for the victim name.


three things will help provide better strength against DDoS
attacks:

a) properly managed anycast of the root infrastructure.

b) securing the edge of the net. remove the zombie hosts
and they can't be used as a tool.

c) signing the root zone, more for layer 8 reasons than others.

when providers decide to start applying various tools to improve
security on the edge (i.e., the clients), things will become better.

John M. Brown, CEO
Chagres Technologies, Inc
Le Geek


From: D. J. Bernstein
Date: Nov 23, 2002, 4:35:23 AM

The DNS protocol should be augmented with a separate protocol for
distributing (signed) copies of the root zone (in a sensible format)
through USENET, mailing lists, etc. ISPs can and should run local root
servers.
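
A sketch of what the consuming side might look like at an ISP (the URL,
paths, and key handling are purely illustrative assumptions; only the
fetch-verify-install flow is the point):

import subprocess
import urllib.request

ZONE_URL = "https://example.net/root-zone/root.zone"   # hypothetical mirror
SIG_URL = ZONE_URL + ".asc"                             # detached PGP signature

def fetch(url, dest):
    with urllib.request.urlopen(url) as r, open(dest, "wb") as f:
        f.write(r.read())

def install_root_zone():
    fetch(ZONE_URL, "/var/tmp/root.zone")
    fetch(SIG_URL, "/var/tmp/root.zone.asc")
    # Verify against a locally configured keyring before trusting anything.
    subprocess.run(
        ["gpg", "--verify", "/var/tmp/root.zone.asc", "/var/tmp/root.zone"],
        check=True,
    )
    # Only after verification would the zone be handed to the local root
    # server (tinydns, BIND, etc.); that step is deliberately left out here.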

I agree with the idea of caching root zone data for a very long time.
The root-zone protocol should promise that every piece of data will last
for a month.

Effects on load: Everybody will receive the entire zone, rather than
just the parts they need. On the other hand, any sensible format would
be much smaller than DNS packet format. More importantly, the data will
be cached much more effectively than it is with the current root-zone
protocol. Most importantly, the load will be very widely distributed.

Side benefit: It will be easy to expand to hundreds of .com servers. Of
course, the root servers could pack more than 20 .com server addresses
into a 512-byte UDP packet with the current protocol (if they drop the
silly one-name-one-address notion), and nobody would complain if the
root servers selected those addresses randomly from a much larger pool;
but distributing the root zone lets ISPs pick nearby .com servers.
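
A back-of-the-envelope check of that packing claim (the byte counts are
approximate and assume one shared server name with full name compression):

# Classic DNS over UDP caps the message at 512 bytes.
HEADER = 12                      # fixed DNS header
QUESTION = 5 + 2 + 2             # "com." QNAME + QTYPE + QCLASS
NS_RR = 2 + 2 + 2 + 4 + 2 + 20   # one NS record naming a single shared
                                 # server name (~20 bytes of RDATA)
PER_A = 2 + 2 + 2 + 4 + 2 + 4    # each glue A record, owner name compressed
                                 # to a 2-byte pointer

budget = 512 - HEADER - QUESTION - NS_RR
print(budget // PER_A)           # roughly 28 addresses -- comfortably more than 20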

---D. J. Bernstein, Associate Professor, Department of Mathematics,
Statistics, and Computer Science, University of Illinois at Chicago

From: Randy Bush
Date: Nov 23, 2002, 11:31:22 AM

perhaps folk should review steve bellovin's nanog talk (dc?) on
pushback.

randy

From: Jim Reid
Date: Nov 23, 2002, 1:33:59 PM

>>>>> "djb" == D J Bernstein <d...@cr.yp.to> writes:

djb> The DNS protocol should be augmented with a separate protocol
djb> for distributing (signed) copies of the root zone

I can't believe you just said that. Does this mean you have recanted
on your previous strident objections to DNSSEC? :-)

From: Alec H. Peterson
Date: Nov 23, 2002, 1:34:11 PM

--On Saturday, November 23, 2002 12:47 AM -0700 "John M. Brown"
<jo...@chagres.net> wrote:

> packet forwarding engines won't tell the difference between
> a good query and a bad query without serious penalty to the
> PFE performance.

I don't think I understand: what does my suggestion have to do with packet
forwarding engines distinguishing between good and bad queries?

>
> your idea would break caching and cause more flotsam to hang
> around in various systems.

Well yes, it would break the current caching model by changing it.

>
> personally, I like having things age out of the DNS caches
> during an attack. It lets me change the A RR for a victim to
> a different IP. Useful for those scripts that don't update
> their cached IP for the victim name.

But if your authoritative DNS servers aren't even reachable to have the
cache re-populated, then what good is it to have the cache aged? My
proposal doesn't change the current cache aging system: you can still have
a 10-minute TTL and have caches re-query the authoritative servers after the
original 10 minutes has expired. It just _allows_ caching nameservers to
keep records longer if it is not possible to re-populate the cache because
the authoritative nameservers are unreachable.

>
> three things will help provide better strength against DDOS
> attacks.
>
> a) properly managed anycast of the root infrastructure.

Agreed that anycast does help.

>
> b) securing the edge of the net. remove the zombie hosts
> and they can't be used as a tool.

Agreed.

>
> c) signing the root zone, more for layer 8 reasons than others.

I fail to see how signing the root zone would keep somebody from flooding
me with packets.

>
> when providers decide to start applying various tools to improve
> security on the edge (i.e., the clients), things will become better.

No argument.

Alec

--
Alec H. Peterson -- a...@hilander.com
Chief Technology Officer
Catbird Networks, http://www.catbird.com


From: D. J. Bernstein
Date: Nov 23, 2002, 1:36:19 PM

PGP 2048-bit ElGamal signatures are probably the best choice for
root-zone distribution today: the signature format is reasonably simple
and reasonably well documented, and free signature-checking software is
already widely deployed. Of course, the root-zone protocol can support
multiple signatures on the same file.

Jim Reid writes:
> I can't believe you just said that. Does this mean you have recanted
> on your previous strident objections to DNSSEC? :-)

Have you stopped beating your wife, Jim?

Anyone who wants to see what I've actually said about DNSSEC should read
http://cr.yp.to/djbdns/forgery.html.

---D. J. Bernstein, Associate Professor, Department of Mathematics,
Statistics, and Computer Science, University of Illinois at Chicago


From: D. J. Bernstein
Date: Nov 23, 2002, 3:44:48 PM

After receiving an email response from the Netherlands, I realize that I
should explain for international readers (and perhaps for some American
illiterates) what ``Have you stopped beating your wife?'' means.

That phrase is, at least in English, a standard reference to a fallacy
pointed out by Aristotle and also known as

* plurium interrogationum (``many questions'');
* the fallacy of interrogation;
* the fallacy of presupposition;
* the loaded-question fallacy; and
* the complex-question fallacy.

The fallacy occurs when a question posits an unproven assumption. For
example, the fallacious question ``Have you stopped beating your wife,
Jim?'' presumes that Jim has a wife, and has been beating her; there is
no evidence for these presumptions, and in fact we all presume the
opposite.

Jim asked a fallacious question making certain incorrect presumptions
about what I have said about DNSSEC. I responded by pointing out the
fallacy and the underlying facts.

From: John M. Brown
Date: Nov 23, 2002, 4:27:11 PM

> -----Original Message-----
> From: owner-nam...@ops.ietf.org
> [mailto:owner-nam...@ops.ietf.org] On Behalf Of Alec H. Peterson
> Sent: Saturday, November 23, 2002 7:49 AM
> To: jo...@chagres.net; namedr...@ops.ietf.org
> Subject: RE: DNS Server DoS Attacks
>
> --On Saturday, November 23, 2002 12:47 AM -0700 "John M. Brown"
> <jo...@chagres.net> wrote:
>
> > packet forwarding engines won't tell the difference between
> > a good query and a bad query without serious penalty to the
> > PFE performance.
>
> I don't think I understand: what does my suggestion have to
> do with packet forwarding engines distinguishing between good
> and bad queries?

long flight, coffee == empty; it was part of a different conversation
that slipped into this thread. sorry.

Some people were arguing that you could filter via ACLs and such;
that works until the poison (bad packets) looks, smells and tastes like
what you think lunch should be.
> But if your authoritative DNS servers aren't even reachable
> to have the cache re-populated, then what good is it to
> have the cache aged?

keeps the system cleaner. see Paul's answer.

> My proposal doesn't change the current cache aging system: you can
> still have a 10-minute TTL and have caches re-query the authoritative
> servers after the original 10 minutes has expired. It just _allows_
> caching nameservers to keep records longer if it is not possible to
> re-populate the cache because the authoritative nameservers are
> unreachable.

So how does this caching change handle a zone going away? If I remove
a zone from service, is that not like a DoS?

what I'm hearing is "let's have a static value that keeps data in a cache
longer than the owner wants, just in case things break"... is that correct?


> > c) signing the root zone, more for layer 8 reasons than others.
>
> I fail to see how signing the root zone would keep somebody
> from flooding me with packets.

It doesn't. It "protects" the zone; it's a layer 8 warm and fuzzy
thing. It does help make sure that the anycast box you are asking is
serving the same data as all of the others, and if it's not, you can
ignore that server.
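
A sketch of that kind of cross-instance check, assuming the dnspython
library and a hypothetical list of per-instance addresses (comparing SOA
serials is the crudest version; a signed zone would let you verify the
data itself):

import dns.resolver  # dnspython, assumed available

# Hypothetical unicast addresses of individual anycast instances.
INSTANCES = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

def root_serial(server):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server]
    answer = r.resolve(".", "SOA")
    return answer[0].serial

serials = {s: root_serial(s) for s in INSTANCES}
if len(set(serials.values())) > 1:
    # One instance is serving different data; stop using it.
    print("divergent instances:", serials)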

From: Alec H. Peterson
Date: Nov 23, 2002, 4:27:20 PM

--On Saturday, November 23, 2002 13:33 -0700 "John M. Brown"
<jo...@chagres.net> wrote:

>
> keeps the system cleaner. see Paul's answer.

In the normal case, yes, and I agree that it does end up making the cache
uglier. But isn't it better to keep things running when things are going
wrong?

>
> So how does this caching change handle a zone going away? If I remove
> a zone from service, is that not like a DoS?

The entries will stay around for a while. If somebody really wants all of
the DNS to disappear immediately then this would present an issue, but is
this really a huge concern?

>
> what I'm hearing is "let's have a static value that keeps data in a cache
> longer than the owner wants, just in case things break"... is that correct?

Not at all. The owner would have complete control over his TTLs, just like
he does now. If the owner of the zone doesn't want to take advantage of
this feature then he doesn't have to.

>
> It doesn't. It "protects" the zone; it's a layer 8 warm and fuzzy
> thing. It does help make sure that the anycast box you are asking is
> serving the same data as all of the others, and if it's not, you can
> ignore that server.

Uhm, I certainly don't disagree with that, but I still fail to see how it
relates to the discussion at hand.

Alec

--
Alec H. Peterson -- a...@hilander.com
Chief Technology Officer
Catbird Networks, http://www.catbird.com


From: John M. Brown
Date: Nov 23, 2002, 4:29:22 PM

> > It doesn't. It "protects" the zone; it's a layer 8 warm and fuzzy
> > thing. It does help make sure that the anycast box you are asking is
> > serving the same data as all of the others, and if it's not, you can
> > ignore that server.
>
> Uhm, I certainly don't disagree with that, but I still fail to see how
> it relates to the discussion at hand.

only that I had mentioned it as one of the things to be done
to help protect anycasting the root zones, and you had asked
how this helps protect. it's now not part of the rest of
this thread.


From: Rob Payne
Date: Nov 23, 2002, 5:07:56 PM

On Sat, Nov 23, 2002 at 01:37:14PM -0700, Alec H. Peterson wrote:
> --On Saturday, November 23, 2002 13:33 -0700 "John M. Brown"
> <jo...@chagres.net> wrote:
>
> >
> >keeps the system cleaner. see pauls answer
>
> In the normal case, yes, and I agree that it does end up making the cache
> uglier. But isn't it better to keep things running when things are going
> wrong?

It is not better if "keeping things running" keeps the problem from
getting fixed.

-rob



From: Rob Payne
Date: Nov 23, 2002, 5:08:24 PM

On Sat, Nov 23, 2002 at 05:28:16PM -0000, D. J. Bernstein wrote:
> PGP 2048-bit ElGamal signatures are probably the best choice for
> root-zone distribution today: the signature format is reasonably simple
> and reasonably well documented, and free signature-checking software is
> already widely deployed. Of course, the root-zone protocol can support
> multiple signatures on the same file.

Let me see if I understand your proposal. You want to turn the root
zone into a signed "hosts.txt" (RFC 952, 953), and how, exactly, does
that scale this time around when it did not scale the last time? More
distribution methods make for more attack vectors and more
opportunities for DoS against different groups. Maybe it's time to
review section 2.1 of RFC 1034 to see the problems with that model.

Your previous message said:

> The root-zone protocol should promise that every piece of data will
> last for a month.

That data should be guaranteed to last a month from when, exactly?
From the time it was signed, or from when it was downloaded? The former
would mean that *everyone* will be attempting to grab it at the same
time (every thirty days from whenever this process starts); the latter
would mean that the data can *never* change. The current situation is
that data is valid for a shorter period of time (1 TTL) and systems
can grab it at any time, meaning that an attack has to last for the
current (1/2 TTL) to create an outage that will affect most systems.

If we go to a set of static data, valid for a fixed time frame, we
narrow the "window of opportunity" for attack/DoS to a much smaller
period (the first [time period] at the beginning of a 30-day cycle
when everyone is grabbing the root zone, thus putting heavy loading on
the servers that are distributing the new information). How, exactly,
does this provide for a system that is more resistant to attack? It
actually makes a well-planned attack (around the first [time period]
of the update cycle) more likely to create an effective DoS.

And, of course, this still ignores most of the reasons for DNSSEC.
Being able to get trustworthy data from entities with unknown motives
is not possible when the data comes to you without its covering
signatures. The provider of my DNS service being able to check
signatures which they do not pass along with the data does not do
anything to provide me with usable data.

Nym-based names and bookmarks do not fix the problem. Each time a key
is compromised, the name changes (the key changes and therefore the
fingerprint of the key which makes up the nym changes). If there is
no method for a chain of trust check on DNS signature keys, owners of
hosts end up making a choice between invalidating all of the
"bookmarks" that other people have stored for their host, or
continuing to use the compromised key.

-rob


From: Alec H. Peterson
Date: Nov 23, 2002, 6:34:39 PM

--On Saturday, November 23, 2002 16:43 -0500 Rob Payne
<rnsp...@the-paynes.com> wrote:

>
> It is not better, if "keeping things running" keeps the problem from
> getting fixed.

*sigh*

If people think I'm advocating implementing this instead of attacking DoS
vulnerabilities at their source then obviously I'm not communicating very
well.

Alec

--
Alec H. Peterson -- a...@hilander.com
Chief Technology Officer
Catbird Networks, http://www.catbird.com


From: Alec H. Peterson
Date: Nov 23, 2002, 6:40:41 PM

So let me re-state my goal here in no uncertain terms, so that nobody can
infer anything that is not meant to be implied by my statements.

My proposal (essentially a second TTL that keeps records around longer in
the event that authoritative servers are unable to answer) is _NOT_ meant
to be any sort of 'solution' to DoS attacks. I feel that it is still
incredibally important for us to do everything that we can to secure the
edge and keep these attacks from happening.

HOWEVER, we have been trying to do that for years, and yet the DoS attacks
persist. My proposal is intended to harden our infrastructure so that we
can tolerate these attacks while they are still going on. Certainly I
would rather we secure the edge, but it is obvious that DoS attacks still
occur. I feel it is our duty to do everything we can to operate within the
parameters of reality on the Internet today.

So, we have two options. We can continue to say 'secure the edge' and hope
that makes the problem go away. Or we can continue to work on securing the
edge while also hardening our existing infrastructure to deal with the
issues we face here and now.

So, with those parameters in mind, I hope people now clearly understand my
goal here. If there has been any misunderstanding please let me know.

From: D. J. Bernstein
Date: Nov 24, 2002, 1:11:57 PM

Rob Payne writes:
> You want to turn the root zone into a signed "hosts.txt" (RFC 952,
> 953), and how, exactly, does that scale

I already answered that: ``Effects on load: Everybody will receive the
entire zone, rather than just the parts they need. On the other hand,
any sensible format would be much smaller than DNS packet format. More
importantly, the data will be cached much more effectively than it is
with the current root-zone protocol. Most importantly, the load will be
very widely distributed.''

The last factor is, as I said, the most important one. USENET wouldn't
notice if ten copies of the root zone---or ten thousand copies---were
sent out every day.

> it did not scale the last time

Nobody really tried to make it scale, but this is beside the point.
``Root zone'' does not mean ``complete list of Internet hosts.''

---D. J. Bernstein, Associate Professor, Department of Mathematics,
Statistics, and Computer Science, University of Illinois at Chicago


From: Edward Lewis
Date: Nov 25, 2002, 10:43:18 AM

At 16:20 -0700 11/23/02, Alec H. Peterson wrote:
>So let me re-state my goal here in no uncertain terms, so that nobody can
>infer anything that is not meant to be implied by my statements.
>
>My proposal (essentially a second TTL that keeps records around longer in
>the event that authoritative servers are unable to answer) is _NOT_ meant
>to be any sort of 'solution' to DoS attacks. I feel that it is still
>incredibly important for us to do everything that we can to secure the
>edge and keep these attacks from happening.
>

The problem with the proposal is that it breaks one goal of the DNS.
DNS is a distributed database and as such needs to maintain
coherency in the answers it gives. The TTL is the main parameter
that is available to allow caches (needed to alleviate high demand on
authoritative servers and paths to them) to attempt to remain
coherent. By instructing caches to hold data any longer than the
TTL, you are making it less likely the DNS can maintain coherency.

It is also important to note that the TTL is really the only way a
zone, via an authoritative server, can inform remote caches of how
often the zone expects to refresh the data. Masters and slaves have
the SOA parameters to govern zone transfer schedules. The last
parameter in the SOA is used for negative caching, and just that, in
caches.

If we were to add a second TTL (as in 'in emergency'), then a lot of
other activity in DNS would be more complicated. E.g., in
considering key roll over, we are trying to determine how much time
should elapse before removing a key. The current calculation
involves the TTL as part of an estimate as to when a key will
completely disappear from the DNS.
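
As a rough illustration of the kind of estimate involved (the figures are
invented for the example and ignore signature validity periods and other
real-world inputs):

# After a DNSKEY stops being published, old copies can still sit in
# caches for up to the record's TTL. With a single TTL the earliest
# safe removal time is easy to bound:
dnskey_ttl = 3600                      # seconds, example value
propagation_slop = 600                 # margin for slow secondaries, example
safe_to_remove_after = dnskey_ttl + propagation_slop

# With a second, "emergency" TTL the bound has to use the larger value,
# since a cache may legitimately hold the key that much longer:
emergency_ttl = 7 * 24 * 3600          # example only
safe_to_remove_with_emergency = emergency_ttl + propagation_slop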

That's the more pragmatic answer as to why addressing DoS by
encouraging caches to hold data longer should be discouraged. A more
philosophical answer is that addressing security concerns should not
overwhelm the main mission. No answer to a vulnerability should
alter DNS's core goals, and database coherency is one of the most
important.

Since we often drop into gross analogies when trying to dismiss an
idea, I will include this one:

This would be like amputating your hand so that you won't break a finger.
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis +1-703-227-9854
ARIN Research Engineer

From: John S. Quarterman
Date: Nov 25, 2002, 12:17:11 PM

> Second it would be useful to know which systems (if any) went down. To
> date I know the identity of 5 of the 4 servers that stayed up and do not
> know the identity of a single machine that went down.

All 13 root DNS servers were up during the DDoS attack of 22-23 October 2002.
3 of them turned off ICMP ECHO responses, but were responding to DNS requests.
There were side effects on Internet performance elsewhere.

-jsq

From: John S. Quarterman
Date: Nov 25, 2002, 12:45:46 PM

>I thought that was the most likely situation.
>
>There may also have been measurement problems due to ISPs turning off
>transport of ICMP pings and due to ICMP packets being preferentially
>dropped which would explain some of the measurements.

Do you have evidence for either of those things?
If not, it would be best not to base architecture on speculation.

-jsq

From: John S. Quarterman
Date: Nov 25, 2002, 2:13:09 PM

Phill,

> I assert that unless you know for certain that the measurements
> are not contaminated in the manner hypothesised that you would be
> operating on speculation by accepting them at face value.
>
> The onus of proof is generally held to be on the observation.

I'm having a little trouble with this logic. You made two speculations,
with no evidence, and now you wish to shift the burden of proof for
those speculations to someone else?

> The only exception being cases like holocaust denial, moon
> landing hoax conspiracy theories and claims that OJ was innocent where
> the objections to the evidence require a vast number of other
> assumptions to be made.
>
> The figures you give, 3 servers turned off ICMP do not match the
> measurements reported in the press (9 servers down) or my reading of the
> measurement sites at the time. Ergo it appears that either more sites
> turned off ICMP than you report

I have data showing regular ping responses from the other 10 DNS roots
during that period, as well as regular ping attempts to the 3 that stopped
responding to ping, plus direct correspondence with the operators of those 3.

> or some network operations decision or
> network-related effect, probably due to congestion, may have occurred.

> Of course there may be yet another reason.

Indeed. One could speculate any number of potential causes for any
effect. The burden of proof would be on the person so speculating.
If you have concrete evidence for your speculations, please cite it;
otherwise, we might as well move on.

> Phill

From: John S. Quarterman
Date: Nov 25, 2002, 2:29:23 PM

Jon Postel used to have a rule that no one should post to namedroppers
more than once a day. The wisdom of that rule has become evident in
this thread. This will be my last post on this subject today.

> The measurements from the measurement sites plus your own observations
> show a discrepancy. The measurement sites I saw had something like 7
> sites
> offline. You say you have data showing responses from 10 of the servers.
>
> Thus there is a discrepancy of 4 servers to be accounted for and an
> indication that the effect was dependent on the observation point.

As I said, I have regular responses during that period from 10 DNS root
servers and regular pings with no responses for the other 3.
I think 10 + 3 = 13 total root DNS servers.

The data I am referring to was publicly visible at the time via
http://average.matrix.net/dns-root/

That data for each DNS root server was collected from half a dozen
different observation points, not one.

It's hard to tell what you mean by "the measurement sites".
I'd be happy to hold a conversation about them, but that's not
possible without any specifics.

> The precise cause of the result is irrelevant, the only conclusions we
> can draw are that the measurements from the sites were not sound and
> that we should build measurement systems that are not subject to the
> possible sources of contamination described.

Indeed, we took such considerations into account when we constructed
our services. I don't know about "the sites".

> Phill

From: Vincent Renardias
Date: Nov 25, 2002, 2:36:05 PM

"D. J. Bernstein" <d...@cr.yp.to> wrote in message news:<arni4r$eg39$1...@isrv4.isc.org>...

> The DNS protocol should be augmented with a separate protocol for
> distributing (signed) copies of the root zone (in a sensible format)
> through USENET, mailing lists, etc. ISPs can and should run local root
> servers.

It's already distributed by FTP.
I even have a perl script to update my local (djbdns-based :-) DNS
root server with that info. (http://www.renardias.com/rootdnsupdate.pl)

Quick install info: set up an instance of tinydns in /etc/rootdns using
a local IP address (say 127.37.0.1), use the above script to get the
zone information, then edit the servers/@ file of your dnscache
instance and leave only 127.37.0.1 in it.
(There used to be a very detailed web page about setting up djbdns as a
root NS, but I can't find the URL again.)
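
For the dnscache side, the change amounts to pointing the cache at the
local instance (a sketch only; the paths assume a stock daemontools/djbdns
layout, and the 127.37.0.1 address is the example above):

# Point a stock dnscache instance at the local root server only.
# dnscache reads its list of root server addresses from root/servers/@
# under its service directory; replacing the contents with the local
# instance's address keeps all root queries on the box.
servers_file = "/service/dnscache/root/servers/@"   # stock layout, adjust to taste
with open(servers_file, "w") as f:
    f.write("127.37.0.1\n")
# Restart dnscache afterwards (e.g. svc -t /service/dnscache) so it
# re-reads the file.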

Regards,

--
Vincent RENARDIAS
Directeur Technique
StrongHoldNET / http://www.strongholdnet.com

From: Danny Mayer
Date: Nov 26, 2002, 12:01:46 AM

At 03:33 PM 11/25/02, Hallam-Baker, Phillip wrote:

> > Indeed, we took such considerations into account when we constructed
> > our services. I don't know about "the sites".
>

>In which case we have to work out why the press either got hold of the
>wrong figures or misinterpreted the figures and used those as the basis
>of the Associated Press and Reuters reports.

This has nothing to do with namedroppers. You can't control where the
agencies and the newspapers get their information.


>By design or chance someone managed a reputation attack against the
>system that reached the mainstream media and resulted in senior policy
>people calling CEOs etc. That is a major problem. Media management must
>be a part of the solution.

I've almost never seen a newspaper article on something for which I have
direct personal knowledge get it right. However, this is not a TECHNICAL
problem, it's a management problem. You're asking in the wrong forum.

Danny


> Phill
