
Re: Protecting caches?


Florian Weimer

Aug 29, 2008, 2:52:07 AM
* Andrew Sullivan:

> What I haven't seen much of is discussion of protecting caches _as
> such_. That is, given that we are going to cache, are there
> techniques that solve the dangers of a cache other than just
> preventing the cache from ever having the wrong data in the first
> place?

Here's an idea to contain the damage of a bad record by reducing the
amplification effect (attack one cache, compromise the network view
for thousands of clients):

If information enters the cache for the first time, you only use it a
fixed number of times before fetching it again from the network (say
10). If the usage count is reached, you fetch the data from the
network (even if the TTL has not yet expired). If it is still the
same, you double the counter. If it is different, you fall back to
the initial value.

This will cause additional DNS traffic for those who publish unstable
data. It is also reasonable to disable it during the cache warm-up
phase, and to keep records past their TTL (to avoid the cold start;
this makes sense for DNSSEC as well, where you can often avoid
cryptographic operations if the freshly fetched RRSIG matches the
stored one).
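
For concreteness, a minimal sketch of the counter idea in Python
(illustrative only, not taken from any real resolver; the fetch callable
stands in for a hypothetical upstream re-query for this name and type):

import time

INITIAL_BUDGET = 10   # fixed number of uses before the first forced re-fetch

class CountedEntry:
    def __init__(self, rrset, ttl, fetch):
        self.rrset = rrset
        self.expires = time.time() + ttl
        self.fetch = fetch            # hypothetical upstream lookup
        self.limit = INITIAL_BUDGET
        self.budget = INITIAL_BUDGET  # uses left before the next forced re-fetch

    def serve(self):
        self.budget -= 1
        if self.budget <= 0:                 # usage count reached
            fresh = self.fetch()             # re-query even if the TTL has not expired
            if fresh == self.rrset:
                self.limit *= 2              # still the same: double the counter
            else:
                self.rrset = fresh
                self.limit = INITIAL_BUDGET  # changed: fall back to the initial value
            self.budget = self.limit
        return self.rrset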

--
Florian Weimer <fwe...@bfk.de>
BFK edv-consulting GmbH http://www.bfk.de/
Kriegsstraße 100 tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99


Paul Hoffman

Aug 29, 2008, 3:36:21 PM
At 1:08 PM -0400 8/27/08, Andrew Sullivan wrote:
>What I haven't seen much of is discussion of protecting caches _as
>such_. That is, given that we are going to cache, are there
>techniques that solve the dangers of a cache other than just
>preventing the cache from ever having the wrong data in the first
>place?

What I have not seen, even post-Kaminsky, is a good discussion of
what we put into a cache. For example, I am still befuddled about why
part of the Kaminsky attack works. If I have a record in my cache
with days left on the TTL, why should an attacker be able to change
that record with bad information when I'm asking about a different
record? The advantage of this ("we gave too long of a TTL and now
need to move the IP address quickly") seems to be heavily outweighed
by the ease of the attack.

--Paul Hoffman, Director
--VPN Consortium

Paul Hoffman

Aug 29, 2008, 4:17:58 PM
At 7:55 PM +0000 8/29/08, Paul Vixie wrote:
> > What I have not seen, even post-Kaminsky, is a good discussion of
>> what we put into a cache. For example, I am still befuddled about why
>> part of the Kaminsky attack works. If I have a record in my cache
>> with days left on the TTL, why should an attacker be able to change
>> that record with bad information when I'm asking about a different
>> record? The advantage of this ("we gave too long of a TTL and now
>> need to move the IP address quickly") seems to be heavily outweighed
>> by the ease of the attack.
>>
>> --Paul Hoffman, Director
>> --VPN Consortium
>
>RFC 2181 codified a credibility ranking system which controlled some
>aspects of cache replacement.

Right.

>it's possible that it should have said
>more, gone further.

Or that it should be reconsidered in the current light. A small
modification to the text might go a long way. For example, it says:

Unauthenticated RRs received and cached from the least trustworthy of
those groupings, that is data from the additional data section, and
data from the authority section of a non-authoritative answer, should
not be cached in such a way that they would ever be returned as
answers to a received query. They may be returned as additional
information where appropriate. Ignoring this would allow the
trustworthiness of relatively untrustworthy data to be increased
without cause or excuse.

Adding "or glue from a primary zone" to the list in the first
sentence would eliminate the effects of the Kaminsky attack, would it
not?
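
In rough Python, the credibility ranking plus the proposed glue rule might
gate a cache like this (rank values and helper names are illustrative,
paraphrasing the quoted RFC text rather than any particular implementation):

import time

ANSWER_AUTH     = 4   # answer section of an authoritative response
AUTHORITY_AUTH  = 3   # authority section of an authoritative response
ANSWER_NONAUTH  = 2   # answer section of a non-authoritative response
GLUE_ADDITIONAL = 1   # additional-section data / glue: least trustworthy

cache = {}   # (name, rrtype) -> (rrset, rank, expires)

def store(key, rrset, rank, expires):
    old = cache.get(key)
    if old is not None and old[2] > time.time() and old[1] > rank:
        return                   # lower-credibility data never replaces cached data
    cache[key] = (rrset, rank, expires)

def answer_from_cache(key):
    entry = cache.get(key)
    if entry is None or entry[2] <= time.time():
        return None
    if entry[1] <= GLUE_ADDITIONAL:
        return None              # glue may guide iteration but is never returned as an answer
    return entry[0]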

>however, it's ultimately desireable that a long
>TTL not lock out same-credibility replacement data. TTL is about
>expiry not replacement.

That is the crux of the question. It sounds like you disagree with my
assessment that the disadvantage of this policy (the Kaminsky attack)
is stronger than the advantage (allowing replacement before expiry,
even if the replacement is from an attacker).

Paul Hoffman

Aug 29, 2008, 6:36:00 PM
At 8:26 PM +0000 8/29/08, Paul Vixie wrote:
> > Adding "or glue from a primary zone" to the list in the first sentence
>> would eliminate the effects of the Kaminsky attack, would it not?
>
>for replacement, but not insertion, yes.

Agree.

> > >however, it's ultimately desireable that a long TTL not lock out
>> >same-credibility replacement data. TTL is about expiry not replacement.
>>
>> That is the crux of the question. It sounds like you disagree with my
>> assessment that the disadvantage of this policy (the Kaminsky attack) is
>> stronger than the advantage (allowing replacement before expiry, even if
>> the replacement is from an attacker).
>

>i know there are a lot of working configurations which depend on low latency
>cache replacement and so have long ttl's. we'd be breaking these. i suspect
>that there are more such currently working configurations than i know about.

Could you (or someone) describe these? What is their reason for a
long TTL if they know there is going to be a replacement?

bert hubert

Aug 30, 2008, 3:48:15 PM
On Fri, Aug 29, 2008 at 12:36:21PM -0700, Paul Hoffman wrote:
> part of the Kaminsky attack works. If I have a record in my cache
> with days left on the TTL, why should an attacker be able to change
> that record with bad information when I'm asking about a different
> record? The advantage of this ("we gave too long of a TTL and now

Most programmers of servers used to act on the assumption that packets (and
by extension, questions) were expensive. An answer carrying 'free' data for
which the resolver considered it authoritative was more than welcome in this
respect.

Plus the oft quoted credibility rules of course.

I'm moving to having a switch that lets data with an unexpired TTL not be
overwritten by newer answers.
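
A hypothetical sketch of such a switch (made-up names, not PowerDNS code):

refuse_overwrite_unexpired = True    # the proposed switch

def maybe_update(cache, key, new_rrset, new_expires, now):
    old = cache.get(key)
    if refuse_overwrite_unexpired and old is not None and old[1] > now:
        return                       # keep the unexpired record, ignore the newer answer
    cache[key] = (new_rrset, new_expires)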

Bert

--
http://www.PowerDNS.com Open source, database driven DNS Software
http://netherlabs.nl Open and Closed source services

Peter Koch

Sep 15, 2008, 4:30:22 AM
On Fri, Aug 29, 2008 at 01:17:58PM -0700, Paul Hoffman wrote:

> Or that it should be reconsidered in the current light. A small
> modification to the text might go a long way. For example, it says:
>
> Unauthenticated RRs received and cached from the least trustworthy of
> those groupings, that is data from the additional data section, and
> data from the authority section of a non-authoritative answer, should
> not be cached in such a way that they would ever be returned as
> answers to a received query. They may be returned as additional
> information where appropriate. Ignoring this would allow the
> trustworthiness of relatively untrustworthy data to be increased
> without cause or excuse.
>

> Adding "or glue from a primary zone" to the list in the first
> sentence would eliminate the effects of the Kaminsky attack, would it
> not?

Not sure I understand that fragment. Once data has been put into the
additional section, you no longer know whether it originated from glue
records, from cache content or from data occasionally authoritatively
available at the responding server. Or are you suggesting not to use
glue RRs to fill the answer section? That would aim at servers rather than
resolvers, then.

-Peter

Paul Hoffman

Sep 15, 2008, 11:00:35 AM
At 10:30 AM +0200 9/15/08, Peter Koch wrote:
>On Fri, Aug 29, 2008 at 01:17:58PM -0700, Paul Hoffman wrote:
>
>> Or that it should be reconsidered in the current light. A small
>> modification to the text might go a long way. For example, it says:
>>
>> Unauthenticated RRs received and cached from the least trustworthy of
>> those groupings, that is data from the additional data section, and
>> data from the authority section of a non-authoritative answer, should
>> not be cached in such a way that they would ever be returned as
>> answers to a received query. They may be returned as additional
>> information where appropriate. Ignoring this would allow the
>> trustworthiness of relatively untrustworthy data to be increased
>> without cause or excuse.
>>
>> Adding "or glue from a primary zone" to the list in the first
>> sentence would eliminate the effects of the Kaminsky attack, would it
>> not?
>
>Not sure I understand that fragment. Once data has been put into the
>additional section, you no longer know whether it originated from glue
>records, from cache content or from data occasionally authoritatively
>available at the responding server. Or are you suggesting not to use
>glue RRs to fill the answer section? That would aim at servers rather than
>resolvers, then.

I am proposing that information in the glue records that is not
authenticated never be put in the cache of the recursive server.

--Paul Hoffman, Director
--VPN Consortium


Mark Andrews

Sep 15, 2008, 7:46:15 PM

In message <p06240803c4f428d85b4f@[10.20.30.152]>, Paul Hoffman writes:
> I am proposing that information in the glue records that is not
> authenticated never be put in the cache of the recursive server.

It's possible, I suppose, if one wants to re-delegate just
about every delegation on the planet so that they all have
glueless delegations. Otherwise recursive servers need to use
the glue (which requires caching it internally) to get to the
answers.

It would make it possible to have all records signed by the
parent when performing a referral. The signatures on the
delegating NS RRset would be easy to identify, as the signer
would not match the owner of the NS RRset.

Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: Mark_A...@isc.org

Florian Weimer

Sep 30, 2008, 4:18:30 AM
* Florian Weimer:

> Here's an idea to contain the damage of a bad record by reducing the
> amplification effect (attack one cache, compromise the network view
> for thousands of clients):
>
> If information enters the cache for the first time, you only use it a
> fixed number of times before fetching it again from the network (say
> 10). If the usage count is reached, you fetch the data from the
> network (even if the TTL has not yet expired). If it is still the
> same, you double the counter. If it is different, you fail back to
> the initial value.
>
> This will cause additional DNS traffic for those who publish unstable
> data.

After further consideration, this is mostly equivalent to the "double
up" approach (send all queries twice and compare the result). The
main improvements are latency hiding, dealing with volatile data, and
(with a protocol change) signaling data volatility to upstream
servers. This proposal requires caching of referrals, which current
implementations don't seem to do, adding quite a bit of overhead in
terms of cache size. Compared to the "double up" approach, the number
of upstream queries is not significantly reduced.
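
For reference, a minimal sketch of the "double up" idea in Python (send the
same query twice over independent source ports/QIDs and accept only matching
results; resolve_once is a stand-in for one independent upstream lookup):

def resolve_doubled(name, rrtype, resolve_once):
    first = resolve_once(name, rrtype)    # each call picks its own source port and QID
    second = resolve_once(name, rrtype)
    if first == second:
        return first                      # an off-path spoofer would have to win both races
    raise RuntimeError("mismatched answers for %s/%s" % (name, rrtype))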

--
Florian Weimer <fwe...@bfk.de>
BFK edv-consulting GmbH http://www.bfk.de/
Kriegsstraße 100 tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99


Danny Mayer

Oct 1, 2008, 12:21:23 AM
Andrew Sullivan wrote:
> No hat.
>
> Dear colleagues,
>
> In the discussion around forgery resilience so far, I've seen some
> discussion on how to detect cache poisoning attempts, and some
> discussion on how to foil the attack itself by reducing the ability of
> the attacker to send bad data to a resolver. (I've also seen the
> observation, several times, that DNSSEC will solve these problems
> anyway, so we should concentrate on deploying that; that's an
> operational problem now, and not on-topic for this list.)

I don't altogether agree with this. DNSSEC will only protect up to the
resolver. What people keep forgetting in this discussion is that what
you are really trying to protect is the application, most likely a
browser, from receiving bad answers, and therefore preventing the DNS
resolver from returning bad answers is not enough. The application is
one or two hops away from the resolving DNS. Some systems, like Windows,
use a DNS cache to cache answers on top of everything else. As long as
getaddrinfo() and friends cannot request and receive a DNSSEC
"certified" response, you are dropped right back into the original
problems. Worse than that, if you manage to poison the answers in the
resolver, you also poison the on-system cache AND the application. How
many applications request new answers once they have one? getaddrinfo()
and friends don't even tell you the TTL on the answers you do get,
so you have no way of deciding when to re-request an answer in the first
place, even if the address was originally valid.
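
To illustrate the point with the standard library call (Python shown; the C
getaddrinfo() behaves the same way): the application gets addresses back, but
nothing that says how long they are good for or whether they were validated.

import socket

for family, stype, proto, canon, sockaddr in socket.getaddrinfo(
        "www.example.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])   # an address, with no TTL and no validation status attached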

Danny

bman...@vacation.karoshi.com

Oct 1, 2008, 7:08:22 PM
> > This is generally regarded as a primary feature, the implementation
> > "fails safe" but with no way to override unless a
>
> And my Kiboizer alerts...
>
> > getaddrinfo_no_DNSSEC() function is added in the API. This is also
> > considered a primary bug when you talk to Eric Rescorla, as SSL/TLS
> > had this behavior initially and it proved problematic: most "failures"
> > were benign.
>
> It's not clear how this experience transfers to DNSSEC, of course,
> since SSL/TLS provides benefits (confidentiality) against passive
> attacks even if the server uses a self-signed cert, whereas DNSSEC
> does not, so there's less incentive to deploy in a weak way. However,
> that still leaves open a variety of misconfigurations: all of these
> cryptographic techniques invert Postel's dictum about being liberal in
> what you accept. so if there is any reasonable base rate of
> misconfigurations, e.g., signatures outside their validity windows,
> then there's going to be a real issue in determining how to handle
> verification failures, especially with the limited channel available
> between the resolver and the user.
>
> -Ekr

the other option is to "fail open" ... eg. things are
just like they are now.

--bill

bman...@vacation.karoshi.com

Oct 1, 2008, 9:39:55 PM
On Wed, Oct 01, 2008 at 06:40:30PM -0700, Eric Rescorla wrote:
> At Wed, 1 Oct 2008 23:08:22 +0000,

> bman...@vacation.karoshi.com wrote:
> >
> > > > This is generally regarded as a primary feature, the implementation
> > > > "fails safe" but with no way to override unless a
> > >
> > > And my Kiboizer alerts...
> > >
> > > > getaddrinfo_no_DNSSEC() function is added in the API. This is also
> > > > considered a primary bug when you talk to Eric Rescorla, as SSL/TLS
> > > > had this behavior initially and it proved problematic: most "failures"
> > > > were benign.
> > >
> > > It's not clear how this experience transfers to DNSSEC, of course,
> > > since SSL/TLS provides benefits (confidentiality) against passive
> > > attacks even if the server uses a self-signed cert, whereas DNSSEC
> > > does not, so there's less incentive to deploy in a weak way. However,
> > > that still leaves open a variety of misconfigurations: all of these
> > > cryptographic techniques invert Postel's dictum about being liberal in
> > > what you accept. so if there is any reasonable base rate of
> > > misconfigurations, e.g., signatures outside their validity windows,
> > > then there's going to be a real issue in determining how to handle
> > > verification failures, especially with the limited channel available
> > > between the resolver and the user.
> > >
> > > -Ekr
> >
> > the other option is to "fail open" ... eg. things are
> > just like they are now.
>
> This seems plausible in some cases but not others. I.e., if the
> signature is expired, that's probably fine. However, if the
> signature is totally broken, doesn't accepting it kind of obviate
> the point of DNSSEC?
>
> -Ekr

well, i'd not accept it - I'd flag it as "not validatable" and
proceed w/ resolution.

sure - but then we have dns working at the same level as
today. I'd rather have the DNS continue to work, at some
level, than for it to fail and be dependent on someone
else to correct the errors. Granted, this might be a tuneable
knob that some operational enclaves will want to set to fail closed,
but a "least surprise" default seems to be to fail open.

a default of servfail on validation failure seems like a nearly
insurmountable hurdle for adoption.
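
In other words, a policy knob along these lines (hypothetical sketch, not any
shipping resolver's configuration):

def handle_validation_failure(answer, policy="fail-open"):
    # fail-closed: withhold the data (the client effectively sees SERVFAIL)
    if policy == "fail-closed":
        return None
    # fail-open: flag the data as "not validatable" and proceed with resolution
    flagged = dict(answer)
    flagged["validation"] = "not validatable"
    return flagged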

bman...@vacation.karoshi.com

Oct 2, 2008, 8:37:33 AM
On Wed, Oct 01, 2008 at 03:40:00AM -0700, Nicholas Weaver wrote:
>
> The getaddrinfo() relies on the host's stub resolver. If the host
> stub resolver respects DNSSEC, you do get protection up to
> getaddrinfo(), because I think most implementations will silently fail
> when DNSSEC fails.

if this is the case, then DNSSEC is a non-starter.
actually, i've yet to see a validator be stateless enough
to be able to be wrapped into a system call.

remember, resolution and validation are different things.
getaddrinfo() is a resolution tool, an ephemeral system call
that maintains no state.

validation depends on resolution working - so as to be able to
get the data needed to perform the function. and validation
requires state and (to be effective) a cache.


> This is generally regarded as a primary feature, the implementation
> "fails safe" but with no way to override unless a

> getaddrinfo_no_DNSSEC() function is added in the API. This is also
> considered a primary bug when you talk to Eric Rescorla, as SSL/TLS
> had this behavior initially and it proved problematic: most "failures"
> were benign.

regarded as a feature by whom?

> The fundamental problem with DNSSEC is how the application USES
> getaddrinfo(): it either trusts the name or it doesn't.

again, you are conflating resolution w/ validation.
getaddrinfo() returns data ... w/o any warranties on
accuracy or integrity. if you want that, then you have
to invoke a validator. the "trust" does not lie w/ the system call.


> If its an end-to-end secure application, it never actually trusted the
> name, because the application doesn't trust the network, so DNSSEC
> bought nothing. Zippo. Zilch. Nada.

unless the application under discussion is DNS.
and even then, DNSSEC doesn't care abt trusting the
network or the intermediate nodes that might hold the
data.


> Thus from a cryptographic security viewpoint, DNSSEC is exactly
> backwards: the PRIMARY threat model addressed, a man-in-the-middle, is
> not addressable at the naming operation, but must be addressed at the
> application or application transport layer. Period.

i think i misunderstand your terms here:

naming operation
application

DNSSEC protects the application DNS. Trust is not transitive
and cannot be presumed to map to other applications. The best
DNSSEC can do is assure the data received was received intact
from a known source. what SSH or SMTP or NTP does with the
data returned from a resolution request is up to that application,
not the DNS.

> The secondary problem is one of deployment: Both the stub resolvers
> (Microsoft, Apple) need to be updated to check DNSSEC, the recursive
> resolvers must pass the information unchanged or check DNSSEC, and the
> authoritative servers must provide DNSSEC, the latter being a big
> pain, in many ways due to trust anchor issues. eg, see Clayton's rant
> on trying to set up a DNSSEC-supporting domain:
> http://www.lightbluetouchpaper.org/2008/09/29/root-of-trust/

its not that hard - many folks are signing their data.
i remain unconvinced that stub resolvers need to be updated.
I believe validation can and should be done asynchronously from
resolution. And it is true that a validator API is sadly
lacking - although there have been several efforts over the
years to bring something forward.


> This is why I believe that the problem with out-of-path adversaries
> (blind cache poisoning) needs to be addressed in the resolvers,
> without changing the protocol and without changing the authorities.
> And I believe there is a lot more than just "more entropy" which can
> be done.

Edward Lewis

Oct 6, 2008, 11:08:02 AM
At 18:40 -0700 10/1/08, Eric Rescorla wrote:
>At Wed, 1 Oct 2008 23:08:22 +0000,
>bman...@vacation.karoshi.com wrote:

>> the other option is to "fail open" ... eg. things are
>> just like they are now.
>
>This seems plausible in some cases but not others. I.e., if the
>signature is expired, that's probably fine. However, if the
>signature is totally broken, doesn't accepting it kind of obviate
>the point of DNSSEC?

Smoothing over bumps in security is a fine way to let operational
issues fester unnoticed. Absent proof of widespread, successful
attacks on DNS caches today (I am told there are many attacks that
don't get air time), I bet that the majority of DNSSEC "positives"
will be the result of misconfigurations.

A common one will be maladjusted clocks on verifiers, especially if a
user machine attempts to take validation into its own hands. I base
this on our experience in the early workshops before NTP was on all
our machines (because we were so burned by time). (I once had a
machine that had the right time of day and month/day, but the year
was off by one. That took a lot of squinting at screens and paper
before I noticed it.)

There are five entities involved in a canonical verification - user,
ISP, root, TLD, enterprise. While errors may be rare, I bet errors
will still outpace attacks. Especially after DNSSEC has been running
for some time.

So, even neglecting the security implications, failing open is
probably a bad strategy for operational reasons, to say nothing of
the way it obviates the point of DNSSEC.

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis +1-571-434-5468
NeuStar

Never confuse activity with progress. Activity pays more.

Edward Lewis

Oct 7, 2008, 8:08:49 AM
At 20:55 -0700 10/6/08, Eric Rescorla wrote:

>I totally agree with this claim, but if your resolver
>accepts bogus signatures, then what's the point of having
>signatures at all?

Depends on the exact meaning of "accepts." For normal use, a bogus
signature should never be shrugged off or it is useless to invest the
time in the entire franchise. The only time data with a bogus
signature should be "accepted" is in a diagnostic mode, testing to
determine if the bogosity is the result of an operational error or
indeed a malicious attack. Accepted in the latter case means "is
inspected."

I think the result of DNSSEC is binary to an application - got a
response[0] or didn't.

[0] - includes NXDOMAIN and all other results.

Edward Lewis

Oct 7, 2008, 12:56:19 PM
At 20:55 -0700 10/6/08, Eric Rescorla wrote:

>I totally agree with this claim, but if your resolver
>accepts bogus signatures, then what's the point of having
>signatures at all?

Perhaps my "is a fine way" and "fester unnoticed" weren't taken
sarcastically enough. ;)

If data accompanied by a signature that is proven to be invalid is
accepted at all, then the effort is a waste. And what I meant was,
in addition, if you do, you are enabling poor operational practices
and other poor habits.

The only time a DNS verifier/iterator should pass on data that failed
verification is within a debugging/reporting context.

Edward Lewis

Oct 7, 2008, 4:25:21 PM
At 13:11 -0700 10/7/08, Eric Rescorla wrote:

>I tend to agree with this, but isn't the consequence going to be a lot
>of hard failures due to otherwise innocuous misconfigurations?

Yeah.

How do you decide between a bogus signature due to maliciousness and
a bogus signature due to misconfiguration and a bogus signature due
to a disagreement over policy, etc.?

(Perhaps I should add, if there's just one signature to consider then
it's cut and dried. If there are multiple signatures, if one works we
are cool, if none work there's trouble. That is one way around
disagreement over policy/some misconfigurations, etc.)
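
(In code form, the "if one works we are cool" rule is just an any() over the
signatures; a hedged sketch where verify stands in for a real RRSIG check
against a trusted key:)

def rrset_validates(rrset, rrsigs, keys, verify):
    # Accept the RRset if any signature verifies under any trusted key;
    # only when every signature fails is the data treated as bogus.
    return any(verify(rrset, sig, key) for sig in rrsigs for key in keys)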

Edward Lewis

Oct 7, 2008, 5:14:01 PM
At 16:51 -0400 10/7/08, Brian Dickson wrote:

>Seriously, though, the onus is on the operators of authority servers,
>and to a slightly lesser degree, validating resolver operators,
>to ensure that the bits they run, don't break when they start doing dnssec.

I believe the reverse will be true. When a web client user
encounters an error due to a DNSSEC thumbs down, they call the ISP
help desk (ie, "the Internet is down"). The ISP will "pay" when
there are errors, therefore that's where the onus matters.

The authority servers (root, tld, enterprise) all have
responsibilities to make sure the DNSSEC is set up right, true. It
is up to the ISP to make sure the validating cache server has the
right keys and to have a help desk that can deal with the support
calls. Worse for the ISP, there are multiple authority servers where
mistakes can be made for just one validation chain, as well as user
(ISP customer) mistakes.

(One unwritten [until now] assumption is that the costs for help desk
are considerable when the calls roll in.)

The onus isn't only on the authority servers, but they do need to do
things right.

>And, the end-user really, really should never be able to disable the
>validation. Ever. For anyone. For any reason.

I don't know about that. If the problem is more likely to be due to
an operational error then it is more likely not to be dangerous to
turn off DNSSEC.

'Course, I am making a wild assumption that the incidence of "fat
fingering" is greater than actual attacks. This assumption is not
supported by any facts.

bman...@vacation.karoshi.com

Oct 7, 2008, 6:32:31 PM
On Tue, Oct 07, 2008 at 05:14:01PM -0400, Edward Lewis wrote:
> At 16:51 -0400 10/7/08, Brian Dickson wrote:
>
> >Seriously, though, the onus is on the operators of authority servers,
> >and to a slightly lesser degree, validating resolver operators,
> >to ensure that the bits they run, don't break when they start doing dnssec.
>
> I believe the reverse will be true. When a web client user
> encounters an error due to a DNSSEC thumbs down, they call the ISP
> help desk (ie, "the Internet is down"). The ISP will "pay" when
> there are errors, therefore that's where the onus matters.

and this will work when voice is VoIP/SIP/ENUM - dependent on
DNS lookups working? can they still call anyone at that point?

> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Edward Lewis +1-571-434-5468
>

--bill

Mark Andrews

Oct 7, 2008, 8:46:09 PM

In message <20081007223...@vacation.karoshi.com.>, bman...@vacation.karoshi.com writes:
> On Tue, Oct 07, 2008 at 05:14:01PM -0400, Edward Lewis wrote:
> > At 16:51 -0400 10/7/08, Brian Dickson wrote:
> >
> > >Seriously, though, the onus is on the operators of authority servers,
> > >and to a slightly lesser degree, validating resolver operators,
> > >to ensure that the bits they run, don't break when they start doing dnssec.
> >
> > I believe the reverse will be true. When a web client user
> > encounters an error due to a DNSSEC thumbs down, they call the ISP
> > help desk (ie, "the Internet is down"). The ISP will "pay" when
> > there are errors, therefore that's where the onus matters.
>
> and this will work when voice is VoIP/SIP/ENUM - dependent on
> DNS lookups working? can they still call anyone at that point?

Use a land line, use mobile, use the neighbour's phone.



> > -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> > Edward Lewis +1-571-434-5468
> >
>
> --bill
>

--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: Mark_A...@isc.org


Edward Lewis

Oct 8, 2008, 7:36:41 AM
At 14:21 -0700 10/7/08, Nicholas Weaver wrote:
>So if DNSSEC results in more tech support calls because items don't resolve,
>through fault of the authorities and not the ISP's resolver, no ISP in its
>right mind would support it! Or if support is mandated, it will be set to
>"well, fail open anyway so it doesn't matter, but we can say we
>support DNSSEC"

That's not a foregone conclusion though. As much work as it is for
an ISP and as much of a chance that there may be hard work to avoid
problems, ISPs do have the incentive to make sure that the DNS
servers they operate give accurate answers to their customers.

There are ISPs interested in DNSSEC. ISPs in Sweden are using it.
Recently this too: http://www.dnssec.comcast.net/.

And this relates a story about galve.se and an error. In this case
DNSSEC was on-off-on.
http://www.aptld.org/taipeifebruary2008/20-dnssec_2008_APTLD.pdf.
Unfortunately the slides are not numbered; the three pertinent slides
are towards the end. Just look for "galve.se".

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis +1-571-434-5468
NeuStar

Never confuse activity with progress. Activity pays more.


Edward Lewis

Oct 8, 2008, 10:17:39 AM
At 22:55 -0400 10/7/08, Griffiths, Chris wrote:

>We talk about VOIP/ENUM/SIP and frankly that scares me. What also
>scares me is broken keys for sites like Ebay, Facebook or Myspace if and
>when they exist. How are users going to react when they can't get to these
>from their Phone or mobile device or even - gasp at their home just
>surfing the web.

I'm pretty confident that these dangers won't be realized. (Of
course, DNSSEC even with the current momentum it has might never be
realized.) Like everything else, before anything goes into
mainstream production there will be a long trail of testing,
documentation, process development, and review (and all of that other
boring stuff big companies do that open sourcers seem to hate) before
it sees the light of day. Getting to production is going to take
work. Getting DNSSEC to be relied upon is even more work.

I gave a talk last week. I thought that the talk was fairly
pessimistic about the future of DNSSEC because I presented a laundry
list of the costs of deploying DNSSEC to just a ccTLD. I spent very
little time on the revenue potentials. Afterwards someone commented
that my talk was rather positive and encouraging. I asked "in what
way?" They said that everything of value has a lot of costs in
development, the cost element wasn't a scare to that level of
management.

As far as combining voice and data on one network, my boss in the
early 90's already expressed that concern. Before any situation that
requires putting eggs in baskets, you can bet there will be a lot of
review. On the other hand, remember what it was like when there was
no data network, no cellular, all we had then were land lines? It
worked, via regulations and details.

One of the lines from my presentation last week was "don't let
engineers confuse cost with budget." What that meant was that often
we have the tendency to look at a load of work and decide that it is
too costly. We don't look at the benefit of the work though. DNSSEC
might be another case. It's a lot of work to pull off, I mean all the
elements working together. But the safety it offers, or the value of
the safety, is currently high (the value fluctuates like any other
market) and folks who hold the budget see a net positive in it.

bman...@vacation.karoshi.com

Oct 8, 2008, 1:32:33 PM
On Wed, Oct 08, 2008 at 07:36:41AM -0400, Edward Lewis wrote:
> At 14:21 -0700 10/7/08, Nicholas Weaver wrote:
> >So if DNSSEC results in more tech support calls because items don't
> >resolve,
> >through fault of the authorities and not the ISP's resolver, no ISP in its
> >right mind would support it! Or if support is mandated, it will be set to
> >"well, fail open anyway so it doesn't matter, but we can say we
> >support DNSSEC"
>
> That's not a foregone conclusion though. As much work as it is for
> an ISP and as much of a chance that there may be hard work to avoid
> problems, ISPs do have the incentive to make sure that the DNS
> servers they operate give accurate answers to their customers.

indeed they do, since they are the folks holding the
liability.

> There are ISPs interested in DNSSEC. ISPs in Sweden are using it.
> Recently this too: http://www.dnssec.comcast.net/.

all really good stuff.

> And this relates a story about galve.se and an error. In this case
> DNSSEC was on-off-on.
> http://www.aptld.org/taipeifebruary2008/20-dnssec_2008_APTLD.pdf.
> Unfortunately the slides are not numbered, the three pertinent slides
> are towards the end. Just look for "galve.se".

so - back to my original point. I think that for -now-
having DNSSEC fail-open is a much more credible stance than
taking a hardline - fail-closed - model. Let's save fail-closed
until after we get some more traction.

what was that Postel credo?

--bill

bman...@vacation.karoshi.com

Oct 8, 2008, 1:39:27 PM
On Wed, Oct 08, 2008 at 12:42:43PM -0400, Andrew Sullivan wrote:
> [no hat]

>
> On Tue, Oct 07, 2008 at 04:25:21PM -0400, Edward Lewis wrote:
>
> > How do you decide between a bogus signature due to maliciousness and a
> > bogus signature due to misconfiguration and a bogus signature due to a
> > disagreement over policy, etc.?
>
> You don't, from the point of view of an application. In my reading,
> validation failure is validation failure, full stop. This is related
> to my belief that there are no levels of trust in DNSSEC, and that if
> we want to add that gewgaw, we have some work to do.
>
> A
>

oh... we need to change your beliefs then :)

sure, validation failure is validation failure... but when and
from whom? Ed leaves out transient network failure as a reason
for validation failure... when validation fails, how often do
you retry?

--bill

bman...@vacation.karoshi.com

Oct 8, 2008, 1:50:34 PM
On Mon, Oct 06, 2008 at 11:08:02AM -0400, Edward Lewis wrote:
> At 18:40 -0700 10/1/08, Eric Rescorla wrote:
> >At Wed, 1 Oct 2008 23:08:22 +0000,
> >bman...@vacation.karoshi.com wrote:
>
> >> the other option is to "fail open" ... eg. things are
> >> just like they are now.
> >
> >This seems plausible in some cases but not others. I.e., if the
> >signature is expired, that's probably fine. However, if the
> >signature is totally broken, doesn't accepting it kind of obviate
> >the point of DNSSEC?
>
> Smoothing over bumps in security is a fine way to let operational
> issues fester unnoticed. Absent proof of widespread, successful,
> attacks on DNS caches today (I am told there are many attacks that
> don't get air time), I bet that the majority of DNSSEC "positives"
> will be the result of misconfigurations.


I remember the first pass @ DNSSEC ... the primary drivers
were from security folks looking for perfect security.
And what we got was not operationally deployable.

DNS - as it exists - is full of operational issues and yet
it works well enough that it has become a serious core
technology.

I expect that - in this pass - we will also have many operational
issues w/ DNSSEC deployment that will be smoothed over - if only
to keep things working in/through transition.

wrt your bet - i'll not take it.

> A common one will be maladjusted clocks on verifiers, especially if a
> user machine attempts to take validation into it's own hands. I base
> this on our experience in the early workshops before NTP was on all
> our machines (because we were so burned by time). (I once had a
> machine that had the right time of day and month/day, but the year
> was off by one. That took a lot of squinting at screens and paper
> before I noticed it.)

Ah yes - DNS acquires a dependence on an external application.

> There are five entities involved in an canonical verification - user,
> ISP, root, TLD, enterprise. While errors may be rare, I bet errors
> will still outpace attacks. Especially after DNSSEC has been running
> for some time.

actually, I think there are only three entities in canonical validation.
the validator, the publisher(s) e.g. authoritative and caching servers, and
the zone key holder(s).

> So failing open is probably a bad strategy, neglecting the security
> implications, for operational reasons much less obviating the point
> of DNSSEC.

likely true, but it is better than any of the alternatives.

--bill

>
> --
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Edward Lewis +1-571-434-5468
> NeuStar
>
> Never confuse activity with progress. Activity pays more.


bman...@vacation.karoshi.com

Oct 8, 2008, 1:52:16 PM
On Tue, Oct 07, 2008 at 12:56:19PM -0400, Edward Lewis wrote:
> At 20:55 -0700 10/6/08, Eric Rescorla wrote:
>
> >I totally agree with this claim, but if your resolver
> >accepts bogus signatures, then what's the point of having
> >signatures at all?
>
> Perhaps my "is a fine way" and "fester unnoticed" weren't taken
> sarcastically enough. ;)
>
> If data accompanied by a signature that is proven to be invalid is
> accepted at all, then the effort is a waste. And what I meant was,
> in addition, if you do you are enabling poor operational practices
> and other poor habits.
>
> The only time a DNS verifier/iterator should pass on data that failed
> verification is within a debugging/reporting context.


I disagree... but that kind of depends on what is meant
by "reporting".

Edward Lewis

Oct 8, 2008, 2:03:08 PM
At 17:50 +0000 10/8/08, bman...@vacation.karoshi.com wrote:

> actually, I think there are only three entities in canonical
>validation.
> the validator, the publisher(s) e.g. authoritative and caching servers,
> and the zone key holder(s).

You are counting "publisher(s)" as one when you should be counting it
as at least 3. That's why you don't get a 5. The root zone manager,
the tld manager, and the enterprise are three independent
organizations that may need to be consulted to see why a certain DS
RR wasn't uploaded in a certain zone.

Edward Lewis

Oct 8, 2008, 1:59:02 PM
At 17:32 +0000 10/8/08, bman...@vacation.karoshi.com wrote:

> so - back to my original point. I think that for -now-
> having DNSSEC fail-open is a much more credible stance than
> taking a hardline - fail-closed - model. Lets save fail-closed
> until after we get some more traction.

I'm pessimistic about a plan that includes "later on, we tighten security."

We thought of this back in the day. Lacking the ability to recall or
disable older versions of software, the only way you can change a
code base to support a new paradigm is to get users to upgrade.
Upgrades only come when the user wants to upgrade. And who wants to
get a new car because it has tighter seat belts?

Until the model is "fail closed" DNSSEC can't be relied upon.

With a fail open model letting sloppy operations habits fester
unabated, we will never get to a mature state. Fail open will only
get us what 6Bone got IPv6 - tools and software but not enough
operational reality to get the technology deployed. (For example,
making use of tunnels deferred thought about IPv6 routing/forwarding
table issues.)

> what was that Postel credo?

(http://en.wikipedia.org/wiki/Postel's_law)

We aren't talking about robustness, we are talking about security.

Edward Lewis

Oct 8, 2008, 2:14:52 PM
At 17:39 +0000 10/8/08, bman...@vacation.karoshi.com wrote:

> oh... we need to change your beliefs then :)
>
> sure, validation failure is validation failure... but when and
> from whom? Ed leaves out transient network failure as areason
> for validation failure... when validation fails, how often do
> you retry?

If there is a transient network failure lasting longer than the
maximum allowed time to answer a question - it's unlikely that it is
DNSSEC validation that is the bottleneck. If there is such a
failure, you'd probably not get the answer either. That's why I
don't consider it.

bman...@vacation.karoshi.com

Oct 8, 2008, 2:36:25 PM
On Wed, Oct 08, 2008 at 02:03:08PM -0400, Edward Lewis wrote:
> At 17:50 +0000 10/8/08, bman...@vacation.karoshi.com wrote:
>
> > actually, I think there are only three entities in canonical
> >validation.
> > the validator, the publisher(s) e.g. authoritative and caching
> > servers,
> > and the zone key holder(s).
>
> You are counting "publisher(s)" as one when you should be counting it
> as at least 3. That's why you don't get a 5. The root zone manager,
> the tld manager, and the enterprise are three independent
> organizations that may need to be consulted to see why a certain DS
> RR wasn't uploaded in a certain zone.


kind of depends on where you get your TAs (trust anchors).
I still say three.

--bill

bman...@vacation.karoshi.com

Oct 8, 2008, 2:33:25 PM
On Thu, Oct 02, 2008 at 08:10:02AM -0700, Nicholas Weaver wrote:
>
> >>The fundamental problem with DNSSEC is how the application USES
> >>getaddrinfo(): it either trusts the name or it doesn't.
> >
> > again, you are conflating resoultion w/ validation.
> > getaddrinfo() returns data ... w/o any warrantees on
> > accuracy or integrity. if you want that, then you have
> > to involke a validator. the "trust" does not lie w/ the systemcall.
>
> I don't get the distinction.
>
> Pardon my ignorance, but the resolution process in DNSSEC should
> include validation. Unless the recursive resolver is told otherwise,
> it validates DNSSEC signatures before returning a value. Unless the
> stub resolver is misconfigured, it validates DNSSEC signatures before
> returning the value.

resolution != validation. your assertion that it should include
validation is based on fervent wish, not analytical review.

a resolver is a series of system calls. no state, totally ephemeral.

a "full resolver" or "caching resolver" is a DNS server sitting on
a resolver. in this case, the DNS server has no authoritative data,
it just maintains cache. And in that case, you -might- put a
validator with that DNS server - since you are already committed
to maintain cache/state.

is the distinction clearer?

> getaddrinfo() is a wrapper around a call to the local stub resolver.
> Thus if the stub resolver validates as part of the lookup process,
> getaddrinfo() validates as part of the lookup process. ANd if you
> trust the operating system, you can trust the operating system's stub
> resolver to do this properly.

a stub resolver is, by definition, just the set of system calls.
it can't validate, since there is no local cache.

> >>If its an end-to-end secure application, it never actually trusted
> >>the
> >>name, because the application doesn't trust the network, so DNSSEC
> >>bought nothing. Zippo. Zilch. Nada.
> >
> > unless the application under discussion is DNS.
> > and even then, DNSSEC doesn't care abt trusting the
> > network or the intermediate nodes that might hold the
> > data.
>

> How many end-host lookups are "DNS is the application"? Almost none.

actually - 100% of them. the DNS application will
almost always pass the results of a lookup back to some
other application.

> Actually, trust is transitive all the time. Your application trusts
> the underlying operating system, for example.

let me run that by several of the Security Area directorate.
last I checked, just because the DNS data has not been tampered
with in transit is no assurance that the target host is clean
or any of the applications running thereon are safe.


> > I belive validation can and should be done asyncronously from
> > resolution. And it is true that a validator API is sadly
> > lacking - although there have been several efforts over the
> > years to bring something forward.
>

> Why should validation be done independently? Why should separate
> validation be done at all?

why not? doing validation independently of resolution may be
required for a number of reasons - for example the endnodes may
be constrained by power, number of transistors, etc for running
a userspace cache. lightweight system calls may be all that fit.

> In fact, I will argue that independent validation is EXACTLY the wrong
> answer for almost all applications:

thats a point of view. not a universally held pov, but
one you can hold.

[prognostications held forth]

> In fact, the big use for DNSSEC, I believe, is NOT for securing DNS
> names against a man-in-the-middle (a useless endeavor), but as a "item
> respository with cryptographic integrity", where NON-naming
> information is stored.

thats not a new idea either - but fraught w/ transient trust
problems.

> Not because its a particularly good one (the size limits make it a
> particularly BAD one in fact), but because there is no other off the
> shelf existing secure distributed database code that is readily
> avaliable, unlike transport .

Yowsa... put everything in the DNS 'cause thats all we got?

> And for this, the API could just be a piece of resolver code as a
> library in the application or a standalone application, like TLS and
> ssh are today for anyone who uses it. And would want to be this way,
> because the desired policies on failures etc should not be relying on
> the end host's stub resolver code anyway.

not sure what this means.

--bill

bman...@vacation.karoshi.com

Oct 8, 2008, 2:51:38 PM
On Wed, Oct 08, 2008 at 11:41:44AM -0700, Matthew Dempsky wrote:

> On Wed, Oct 8, 2008 at 10:32 AM, <bman...@vacation.karoshi.com> wrote:
> > what was that Postel credo?
>
> I just tried ssh'ing to ro...@karoshi.com. Why didn't it accept
> "opensesame" as the password? Your server's not being very liberal in
> what it accepts. :-(

well, if you were more conservative in what you sent... :)

bman...@vacation.karoshi.com

Oct 8, 2008, 4:26:28 PM
On Wed, Oct 08, 2008 at 11:52:18AM -0700, Nicholas Weaver wrote:

>
> On Oct 8, 2008, at 11:33 AM, bman...@vacation.karoshi.com wrote:
>
> >On Thu, Oct 02, 2008 at 08:10:02AM -0700, Nicholas Weaver wrote:
>
> I'm afraid I just don't get it.
>
> The whole point of DNSSEC is to insure that the object returned by a
> query has an integrity guarentee associated with it, not that you can
> do a separate integrity check. I can't find anything that suggests
> otherwise.

has the ability to have an integrity guarantee. a priori, it
(DNSSEC) does not ensure that a reply to a query has an integrity
guarantee - it simply provides the framework on which such guarantees
can be made.

> >>getaddrinfo() is a wrapper around a call to the local stub resolver.
> >>Thus if the stub resolver validates as part of the lookup process,
> >>getaddrinfo() validates as part of the lookup process. ANd if you
> >>trust the operating system, you can trust the operating system's stub
> >>resolver to do this properly.
> >
> > a stub resolver is, by definition, just the set of system calls.
> > it can't validate, since there is no local cache.
>

> Actually, this is incorrect. Many MANY end system stub resolvers
> maintain cache state. Try it out for yourself with a TCPdump and
> multiple invocations of ping.

then they are not stub resolvers.

> >>How many end-host lookups are "DNS is the application"? Almost none.
> >
> > actually - 100% of them. the DNS application will
> > almost always pass the results of a lookup back to some
> > other application.
>

> Thats exactly my point. DNS is not the application!

the DNS is AN application. a host-lookup is a DNS lookup, i.e.
the DNS application. the results may be used by other applications.

>
> You can attack the final application by attacking the name -> address
> mapping or many other ways. DNS only needs to not provide ADDITIONAL
> ways to attack the final application.

w/o DNSSEC, there are ADDITIONAL ways to attack, regardless of
the target application. you can and should still use other
security methods like TLS or SSL or other application specific
techniques.

>
> This is why I don't believe the MitM is the proper threat model for DNS.

then take it up w/ the authors of the DNS risk/threat RFC.

> Separate validation is USELESS: Apps either won't validate or they
> will. If the app doesn't validate it, it gains no protection from
> DNSSEC.

thats your claim. we'll see how things pan out.

> >
> > thats not a new idea either - but fraught w/ transient trust
> > problems.
>

> Its the same transient trust problem present in TLS's CA structure, or
> any PKI infrastructure. And it doesn't add any MORE transient trust
> problems than any other PKI infrastructure. In fact, in both cases
> you have to trust Verisign.

er, not quite the same. stuffing SSH keys or any other application
key into the DNS presumes that the application owner controls the contents of
the DNS zone... which is not true in most cases. the one case that
the IETF retained was storing X.509 certificates in CERT RRs.

> > Yowsa... put everything in the DNS 'cause thats all we got?
>

> No, put it in DNSSEC because its code that works.

that argument has been used before w/ limited efficacy.

> Lesson #1: never reimplement crypto when there is an off-the-shelf
> solution.

not quite what I learned but close.

1) never reimplement crypto that works for your intended use.

** unless you are a crypto person and know what the heck you are doing.

>
> > not sure what this means.
>

> That if you actually DO want this, you include library code that
> interfaces with DNS directly, bypassing gethostbyname() completely,
> because the policy you want for the gray cases is probably not the
> policy the stub resolver wants.

so you are talking about a validator API independent of the
resolver API?

bman...@vacation.karoshi.com

Oct 8, 2008, 4:47:18 PM
On Wed, Oct 08, 2008 at 07:01:50PM +0100, Tony Finch wrote:

> On Wed, 8 Oct 2008, bman...@vacation.karoshi.com wrote:
> >
> > sure, validation failure is validation failure... but when and
> > from whom? Ed leaves out transient network failure as areason
> > for validation failure... when validation fails, how often do
> > you retry?
>
> A SERVFAIL because of network breakage is not really a DNSSEC problem.
>
> Tony.

how do you tell?

Dean Anderson

Oct 8, 2008, 5:04:40 PM
[ Note: Post was moderated. ]

On Wed, 8 Oct 2008, Brian Dickson wrote:

> However, without DNSSEC or some similar method of protecting the
> name->address mapping, the data protection is at best of limited use.

The above is not true, as I recently explained on the DJBDNS list.

The point is that there are only two categories: One category of attack
that can come from anywhere, and the other category of attack can only
come from the middle. DNS over TCP eliminates the first category, so no
further effort is needed to eliminate that category. TLS, properly
verified, completely eliminates the second category, so no further
effort is needed to eliminate that category. All done. No more changes
to DNS are needed.

> (You really should review Dan Kaminsky's Black Hat presentation, and
> Anton Kapella's Black Hat presentation, to understand why.)

Kaminsky's claims have been discredited; he didn't discover or even
rediscover forgotten attacks. The attack Kaminsky describes was
described as recently as 2006 to the IETF, and was pointed out by Dr.
Bernstein in the 1990s when the 'bailiwick' changes were being
discussed. The Kaminsky report created "urgency"; a frequent element of
a scam is to have such urgency that one can't stop to think about
whether the claims are legitimate or make sense. Professor Bruce Wedlock
often repeated "Always ask yourself 'Does this make sense?'" Of course,
the attack could have been forgotten and 'rediscovered', but that isn't
the case, either. I found a design report for Unbound (developed by
Nominet, Verisign, NLnet Labs, EP.NET (Bill Manning))
http://www.unbound.net/documentation/ietf67-design-02.pdf in which they
describe that "spoofed NS additionals confuse iterator". This paper was
discussed at IETF 67, in November 2006, so one can't even say that DJB
discovered the attack in the 90's but it was forgotten and rediscovered
by Kaminsky. Kaminsky just created a lot of "urgency", with the result
of scaring people into adopting DNSSEC.

I've recently been discussing a problem Kaminsky and Kevin Day reported
(at blackhat) to exist in DJBDNS. They still have not revealed precisely
what the bug is, months later, and many in the DJBDNS community are
getting fed up with the delay. After criticizing Kaminsky and Day for
failing to talk to the DJBDNS community about the problems, Kaminsky and
Day began to talk a bit more about their mathematical assumptions. To
put it bluntly, their math was complete BS. And after I analyzed the
math a bit (discrediting their analysis), I did discover some few things
that could be smarter in DJBDNS, that apparently they had previously
seen but not reported. They proposed (to me, at least) changes to
DJBDNS to re-use ports. (no patches, just the idea of doing this).
Their proposal has the effect that any port coincidence would then have
multiple possible QIDs instead of exactly one. This actually harms
security, but they don't see that. Instead of

probability of port coincidence * 1/65536 QIDs

as currently a response received must have the correct query, port and
QID, it becomes

probability of port coincidence * probability of QID coincidence.

The Handbook of Applied Cryptography (CRC Press) gives the formulas needed in
section 2.1.5, Birthday problems. On a regular unix system, the first
1025 ports usually won't be assigned to a socket, leaving 64510 ports
available for use. The math for the first case:

model B (2 urns, no replacement)
p4(64510, 200, 200)
exact formula: 1 - perm(m, n1+n2) / (perm(m, n1) * perm(m, n2))
simplified (n1 = n2): 1 - (perm(m, 2n) / (perm(m,n) ^ 2))
(%i48) float( 1 - ( 64510! / ( 64510 - 400)! ) / ( ( 64510! / ( 64510 -
200)! ) ^ 2 ) );
(%o48) 0.46312147260386
(%i49) % * (1/65536);
(%o49) 7.066672860776713E-6
(%i50) 1 / %;
(%o50) 141509.3099258154
(%i51) 200 * %;
(%o51) 2.830186198516307E+7


So, as is, one must repeat 141509 times before success, expect to send
28,301,862 packets against DJBDNS. Yet Kaminsky asserts the birthday
attack is carried out in

65536 * 65536 / 200 = ~21474837 packets.

While within an order of magnitude, Kaminsky calculates an incorrect
number and uses an incorrect formula for a birthday attack.

After the Kaminsky/Day proposed changes to DJBDNS (and possibly other
DNS servers), there would be two birthday attacks that combine resulting
in:

model B (2 urns, no replacement) (port coincidence as above)
p4(64510, 200, 200)
exact formula: 1 - perm(m, n1+n2) / (perm(m, n1) * perm(m, n2))
simplified (n1 = n2): 1 - (perm(m, 2n) / (perm(m,n) ^ 2))
(%i48) float( 1 - ( 64510! / ( 64510 - 400)! ) / ( ( 64510! / ( 64510 - 200)! ) ^ 2 ) );
(%o48) 0.46312147260386

model B (2 urns, no replacement) (QID coincidence)
p4(65536, 200, 200)

(%i73) float( 1 - ( 65536! / ( 65536 - 400)! ) / ( ( 65536! / ( 65536 -
200)! ) ^ 2 ) );
(%o73) 0.4578519611075
(%i75) %o48 * %o73;
(%o75) 0.21204107446267
(%i76) 1 / %;
(%o76) 4.716067405968776
(%i77) 200 * %;
(%o77) 943.2134811937551


So, if you have 200 outstanding ports, with 200 outstanding QIDs, and
any correct port can have a coincidence on the 200 QIDs, the cache is
spoofed in just under 5 tries, or in about 943 packets!!! Holy
added-security flaws!
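
(The two cases can be reproduced without big-factorial arithmetic; a small
Python check using log-gamma, matching the numbers above:)

import math

def log_perm(m, n):
    # log of m! / (m - n)!, the number of ordered selections of n items from m
    return math.lgamma(m + 1) - math.lgamma(m - n + 1)

def collision_prob(m, n1, n2):
    # model B (two urns, no replacement): 1 - perm(m, n1+n2) / (perm(m, n1) * perm(m, n2))
    return 1.0 - math.exp(log_perm(m, n1 + n2) - log_perm(m, n1) - log_perm(m, n2))

p_port = collision_prob(64510, 200, 200)   # ~0.463  (port coincidence)
p_qid  = collision_prob(65536, 200, 200)   # ~0.458  (QID coincidence)

tries_now   = 1 / (p_port / 65536)         # ~141509 rounds, ~28.3M packets at 200 per round
tries_reuse = 1 / (p_port * p_qid)         # ~4.7 rounds, ~943 packets at 200 per round

print(tries_now, 200 * tries_now)
print(tries_reuse, 200 * tries_reuse)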

> I don't believe any reasonable advocates of DNSSEC are also anti-SSL/TLS.
>
> DNS without DNSSEC *is* an additional way to attack the final application.

DNS *with* DNSSEC is an additional way to attack the final application.

DNSSEC caches can still be poisoned by setting the CD bit so that the
cache does not verify the data, then spoofing by ordinary means with the
appropriately set DO and CD bits. The cache won't verify the data, and so
will cache invalid data that it hands to the stub resolver for
verification. The resolver that tries to verify the data will reject the
invalid signature, but can't get past the cache with the bad data.
Result: DOS.

And of course, don't forget that spoofed DNSSEC queries of TLDs or
significant domains can result in a response of up to 8 KB that can't be
easily mitigated.

> > This is why I don't believe the MitM is the proper threat model for DNS.
>

> See the above. MitM protection is necessary, but not sufficient, to
> provide security for any system that relies on DNS, e.g., what most
> people think of when they refer to "The Internet".

DNSSEC does not protect an application against a MITM attack. Even
if it gets the right domain name, an application that does not use TLS
or does not properly verify the certificates is still vulnerable to a
MITM attack.

If the application gets bad domain data, TLS and proper verification
will detect this. DNS 'security' is not necessary to avoid MITM attacks.


There are also some other crypto problems with DNSSEC, in that they
didn't follow the recommendations for adding a random salt before the
signatures, and so are vulnerable to low encryption coefficient attack,
etc.

--Dean


--
Av8 Internet Prepared to pay a premium for better service?
www.av8.net faster, more reliable, better service
617 344 9000

bman...@vacation.karoshi.com

Oct 9, 2008, 10:51:52 AM
On Thu, Oct 02, 2008 at 03:47:35PM +0200, Shane Kerr wrote:
> Bill,

>
> On Thu, 2008-10-02 at 12:37 +0000, bman...@vacation.karoshi.com wrote:
> > its not that hard - many folks are signing their data.
> > i remain unconvinced that stub resolvers need to be updated.
> > I belive validation can and should be done asyncronously from
> > resolution. And it is true that a validator API is sadly
> > lacking - although there have been several efforts over the
> > years to bring something forward.
>
> Can you point us to some of these?

this might be what you are looking for.

http://www.rs.net/rva/index.html

--bill

>
> Too bad the IETF does not "do APIs". (Except when it does, of course.)
>
> --
> Shane

Dean Anderson

Oct 9, 2008, 3:47:48 PM
[ Note: Post was moderated. ]

On Thu, 9 Oct 2008, Wouter Wijngaards wrote:

>
> The thing that Kaminsky added was the retry for a different (previously
> unseen) query name, which delivers the race-to-win and no wait for TTLs.
> The rest is indeed the same. The race-to-win made the problem very urgent.

I don't think Kaminsky added this. This 'race-to-win' was only remotely
relevant after NXDOMAIN caching was added in 1998, after the bailiwick issue.
When the bailiwick issue was being discussed, queries for non-existent
names were not cached. I don't recall anyone ever proposing TTL as a
solution to spoofing attacks, so I don't think it can be a big surprise
that it wasn't a valid security assumption protecting one from spoofing
attacks. How could it be surprising when no one ever said that?

NXDOMAIN caching was offered as a performance enhancement, not a security fix.
(see the abstract of RFC2308). Indeed, RFC2308 security considerations
section actually describes spoofing attacks using NXDOMAIN.

There is nothing 'urgent' in what Kaminsky describes, because there is
nothing new or novel in what Kaminsky describes. Not even the
'race-to-win'.

> > So, if you have 200 outstanding ports, with 200 outstanding QIDs,
> > and any correct port can have a coincidence on the 200 QIDs, the
> > cache is spoofed in just under under 5 tries, or in about 943
> > packets!!! Holy added-security flaws!
>

> Are you saying that using multiple ports at the same time creates
> birthday attack problems?

Yes. But that isn't the big issue. It requires about 28 million packets
to be successful, and there are some things I have proposed to make this
even harder (see posts on d...@list.cr.yp.to). But Kaminsky's idea makes
it trivially easy. I haven't checked to see if these ideas were included
in the 'urgent' patches.

The way DJBDNS (DNScache) works now, a new random port is allocated for
each recursive query, a random QID is selected, and the query is sent.
Packets received on that port are checked by a function called
'irrelevant()', which ensures that the QID and the query are the same.

What was proposed was to reuse ports; that is, one port may have multiple
outstanding query ids. To implement that, one would have to keep track
of all the valid outstanding query ids (probably over all ports), and
check any incoming packet on a valid port to see if its query id matches
one of the valid outstanding query ids. So one may also have a
coincidence over the query ids. The combination of two coincidences
drastically reduces the strength of using random port numbers and random
QID, as the math shows.

> Or do you assume in the above that this is the same query that is sent
> out with multiple outstanding ports/multiple outstanding QIDs?

If you reuse the port, presumably, it is for different queries; but the
same query could be repeated by different clients. So all outstanding
queries could be arranged to be the same textual query. Follow?

> > DNS *with* DNSSEC is an additional way to attack the final application.
> >
> > DNSSEC caches can still be poisoned by setting the CD bit so that the
> > cache does not verify the data, then spoofing by ordinary means with the
> > appropriatly set DO and CD bits. The cache won't verify the data, and so
> > will cache invalid data that it hands to the stub resolver for
> > verification. The resolver that tries to verify the data will reject the
> > invalid signature, but can't get past the cache with the bad data.
> > Result: DOS.
>

> Unbound has a cache of 'bad' data. This is data that it considers
> bogus. This data can be retrieved by setting the CD bit, that is the
> purpose of the CD bit. This data is not returned to regular stub
> resolvers (withholding the bad data). The stub resolver can then choose
> via the CD bit to verify that data itself, with its own security policy.

I don't think this is completely correct. If the site intends to offload
verification onto the stub resolver, then the stub has to set the CD bit.
Such cache setups can be poisoned.

I seem to recall (but haven't verified in the RFCs) that when
unverified data is cached in the 'bad cache' and the stub doesn't set
the CD bit, the cache is supposed to return an error.

And of course, if the cache does verify, it is vulnerable to attacks in
which many hard to verify records are requested at a high rate.

And of course, I didn't mention before that in either case,
cryptographically valid (but replaced) records can be used to poison the
cache, too. Indeed, in some cases (especially NSEC with unverified
delegations), spoofed delegations can be _created_ in secure zones using
the replaced (but still crypto valid) NSEC records. The stub resolver
will conclude these are valid records.

--Dean

--
Av8 Internet Prepared to pay a premium for better service?
www.av8.net faster, more reliable, better service
617 344 9000

