
DNS rebinding: prevention?


Mordechai T. Abzug

Aug 3, 2007, 12:10:17 PM

Is there a way to get bind in caching mode to prevent DNS answers from
external DNS servers that include RR rdata with internal IPs and
internal hostnames? [Question originally asked on dc-sage by Peter
Watkins.]

This would be to prevent DNS rebinding. Information about DNS
rebinding:

http://www.hackszine.com/blog/archive/2007/08/dns_rebinding_how_an_attacker.html
http://crypto.stanford.edu/dns/

If this is not a feature of bind today, can this be added?

Note that there would probably need to be an exception mechanism to
deal with known glue records, delegations to other servers, and other
known valid third-party RRs that point to internal names and IPs.

["match-destinations" has a promising name, but seems to be for DNS
server's own IPs, not for RR rdata.]

- Morty


Chris Buxton

Aug 3, 2007, 12:50:28 PM

named would have to check the address of each A or AAAA record coming
from the outside to see if it refers to an internal address. I don't
believe any name server can do this currently. This seems to be more
a job for an application-level firewall that can fully inspect the
contents of DNS messages and filter based on their contents.

Chris Buxton
Men & Mice

Barry Margolin

Aug 3, 2007, 8:57:11 PM

In article <f8vmku$16bv$1...@sf1.isc.org>,
Chris Buxton <cbu...@menandmice.com> wrote:

> named would have to check the address of each A or AAAA record coming
> from the outside to see if it refers to an internal address. I don't
> believe any name server can do this currently. This seems to be more
> a job for an application-level firewall that can fully inspect the
> contents of DNS messages and filter based on their contents.

Indeed, the DNSD component of high-end Symantec firewalls (SGS
appliances and SEF software) does this by default.

--
Barry Margolin, bar...@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***


Mordechai T. Abzug

Aug 4, 2007, 12:40:27 AM

[resending from subscribed address]

On Fri, Aug 03, 2007 at 09:50:28AM -0700, Chris Buxton wrote:

> named would have to check the address of each A or AAAA record
> coming from the outside to see if it refers to an internal address.

Yes. And CNAMEs, too. Maybe NS records, SRVs, MXs, and some other
record types I'm not thinking of. Which is OK -- bind already looks
at the records at least a little bit, i.e. to cache them, to see if
they match the query, etc. IME, bind runs at low CPU utilization on
modern hardware for 10K users: there's definitely room for more work
to be done by bind. And, of course, like any other feature, this
should be able to be turned off for performance reasons or any other
reason. "Any feature that cannot be turned off is indistinguishable
from a bug."

> This seems to be more a job for an application-level firewall that
> can fully inspect the contents of DNS messages and filter based on
> their contents.

The DNS server is already parsing DNS replies and looking at them, to
make sure that the query IDs match, that the answer is valid based on
the query, to cache, etc. The DNS server is the expert on DNS. Why
pass the buck to the firewall?

Also, for large shops, where one hand doesn't know what the other is
doing and there is a lot of specialization, I don't think you want the
firewall guys responsible for understanding DNS configurations.
There are a lot of subtleties here -- offhand, delegations to the
server, delegations from the server, and known-valid third-party DNS
records that point to internal IPs or names.

And quite aside from organizational issues, I personally work with
both DNS products and firewall products. I would trust a DNS server
to do complicated things with DNS a lot more than I would trust a
firewall to do complicated things with DNS. The DNS servers
(i.e. bind) have a lot more DNS-related knobs to turn, and clearly
understand DNS better.

In summary: yes, this can be worked around in a firewall, but it makes
a lot of sense to provide a workaround in bind.

- Morty


Kevin Darcy

Aug 7, 2007, 12:33:32 AM

I tend to agree. Trying to teach a firewall about all of the nuances of
parsing DNS replies is basically reinventing the wheel. We already have
a capable DNS-response parser in the resolver; why not use it?

If a feature like this were implemented in BIND, though, I think the
"safe" thing to do, from a security standpoint, is to give a REFUSED
response if it were to otherwise give "forbidden" answers (address
records that can be legally dropped, e.g. from the Additional Section in
most circumstances, should simply be dropped). No way should BIND or any
other nameserver implementation give *partial* RRsets in responses, or a
NODATA response instead of an answerful response, just because the
administrator declared that certain addresses were "bad". While this is
"safe" from a security standpoint, it could easily break applications
and/or subsystems, turning otherwise-successful lookups into
irretrievably-failed ones. The documentation would need to clearly state
USE THIS WITH EXTREME CAUTION.
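
To make the policy concrete, here is a minimal sketch (hypothetical names, plain Python, not BIND code): a forbidden record in the Answer section refuses the whole response, while forbidden records in the Additional section are simply dropped.

```python
# Hedged sketch, not actual BIND behavior: model each section as a list
# of rdata strings and the "forbidden" internal addresses as a set.
def apply_policy(answer, additional, forbidden):
    # A forbidden record in the Answer section: refuse outright rather
    # than return a partial RRset or a misleading NODATA.
    if any(rr in forbidden for rr in answer):
        return "REFUSED", [], []
    # Additional-section records may legally be dropped one by one.
    return ("NOERROR", list(answer),
            [rr for rr in additional if rr not in forbidden])
```

For example, a response whose only answer is a "bad" internal address comes back REFUSED, while a clean answer merely loses the tainted additional records.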

In my personal opinion, this "rebinding" problem is really the fault of
the crappy browser security model, not DNS, _per_se_, which has never
presumed that any address record is going to persist for any arbitrary
length of time. There's no "pinning" in DNS, never has been. So, any
such feature by BIND or other DNS implementations just buys some time
until the Security geniuses can redesign their deficient model.

- Kevin

Mark Andrews

Aug 7, 2007, 1:00:43 AM

It's also not as straightforward as people seem to think.

You would need lots of exception processing which would be
a combination of name and/or address and/or tsig pairs.
All of this to "fix" a flawed security model.

Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: Mark_A...@isc.org


Mordechai T. Abzug

Aug 7, 2007, 3:46:04 AM

On Tue, Aug 07, 2007 at 03:00:43PM +1000, Mark Andrews wrote:

> It's also not as straightforward as people seem to think.

> You would need lots of exception processing which would be
> a combination of name and/or address and/or tsig pairs.

Both of my posts so far have said that an exception mechanism would be
needed. That said, bind already has a lot of ACL-related plumbing.
One way to implement this would be a global ACL option like
"allow-external-pointing-to-this" that defaults to "any". Each zone
also takes a (possible) "allow-external-pointing-to-this" ACL that
overrides the global option. Each "allow-external-pointing-to-this"
ACL is a regular bind ACL with addresses, TSIGs, or whatever.

If there is a known external name without a unique external bind ACL
(i.e. address and/or TSIG) that identifies it, and there is no existing
zone statement for it, and you want to consider it internal, create a
zone stanza for that zone that is a "type forward" and repeats the
global forwarding policy. It may even be desirable to create a new
zone type for "inherit", which is semantically equivalent to "type
forward" with the same forwarding policy as the global forwarding
options, just so one can set other options (such as
"allow-external-pointing-to-this") more easily. [I'd like "type
inherit" in general!]

Then, for all incoming queries:

(0) Perform traditional security checks before proceeding. If they
pass, proceed to next step.

(1) Can the query be answered from the authoritative zones or the
cache? If yes, "allow-external-pointing-to-this" does not matter,
and follow traditional rules. Otherwise, proceed to next step.

(2) Does the queried hostname belong to a configured "zone" statement?
If yes, follow traditional DNS rules. Otherwise, proceed to next
step.

(3) Get an answer following traditional rules, and when the answer
comes back, check to see if relevant rdata response fields for the
given record type (i.e. A, NS, AAAA, CNAME, PTR, MX, SRV, etc.)
match a configured forward or reverse zone. If not, it's OK, and
return the response. Otherwise, proceed to next step.

(4) What IP responded? Check "allow-external-pointing-to-this" ACL
for the zone that is pointed at, or if it doesn't exist, the
global "allow-external-pointing-to-this" option. If allowed,
respond normally. Otherwise, give an appropriate error response
to the requestor.
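
To make steps (3) and (4) concrete, a minimal sketch (all names, data structures, and the folding of reverse zones into a CIDR list are hypothetical simplifications, not proposed BIND syntax):

```python
import ipaddress

# Hypothetical internal data: forward zones we are authoritative for,
# our reverse zones collapsed into CIDR blocks, and per-zone exception
# ACLs in the spirit of "allow-external-pointing-to-this".
INTERNAL_ZONES = {"example.com", "example2.com"}
INTERNAL_NETS = [ipaddress.ip_network("192.168.1.0/24"),
                 ipaddress.ip_network("192.168.2.0/24")]
ZONE_ACLS = {"example2.com": {"10.0.0.45"}}   # per-zone exception ACL
GLOBAL_ACL = set()                            # global default: none

def in_internal_zone(name):
    """Does an rdata name fall under a configured internal zone?"""
    return any(name == z or name.endswith("." + z) for z in INTERNAL_ZONES)

def in_internal_net(addr):
    """Does an rdata address fall inside a protected reverse range?"""
    return any(ipaddress.ip_address(addr) in n for n in INTERNAL_NETS)

def rdata_allowed(rdata, responder_ip):
    """Steps (3)-(4): accept an externally supplied record unless it
    points at internal names/addresses from a server outside the ACL."""
    try:
        internal = in_internal_net(rdata)     # A/AAAA rdata
        target_zone = None
    except ValueError:                        # rdata is a name (CNAME, NS, MX...)
        internal = in_internal_zone(rdata)
        target_zone = next((z for z in INTERNAL_ZONES
                            if rdata == z or rdata.endswith("." + z)), None)
    if not internal:
        return True                           # step (3): external target, fine
    acl = ZONE_ACLS.get(target_zone, GLOBAL_ACL)
    return responder_ip in acl                # step (4): only listed servers may
```

So an external A record for 8.8.8.8 passes, an external record pointing at 192.168.1.5 is rejected, and a CNAME into example2.com is accepted only from the responder listed in that zone's exception ACL.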

In a forward-only architecture where the upstream servers are trusted
to hold some "internal" DNS data, the downstream servers should not
bother to implement "allow-external-pointing-to-this", and the
upstream servers should implement "allow-external-pointing-to-this".
In a forward-only architecture where the forwarding server is entirely
external, the "allow-external-pointing-to-this" has to be implemented
on the downstream server.

In a "view" environment, each view should consider its collection of
zones to be independent of other zone collections. So if view A
contains zones X and Y, and view B contains zone Z, it's OK from A's
perspective for X to point to Y, but not for Z to point to Y.

I don't think this will be trivial, but it doesn't seem that complex,
either. Then again, I don't hack bind code. ;)

Note that this assumes that all zones on a cache are "internal" zones.
This is probably a valid view for corporations, government agencies,
small businesses, non-profits, and most other organizations. Service
providers running DNS servers for multiple entities, each of which
does not trust the other, will have a more complex perspective. The
cleanest way to handle this would be to split the different entities
into separate views -- which is probably the best way to handle this
architecture even without this issue. However, I cannot personally
speak to the service provider environment.

Also note that this doesn't preclude having secondary/slave zones for
peer companies/agencies/organizations where the peer zones are
untrusted. If this runs on the same hardware as the caching DNS, the
externally-visible DNS can be in its own view.

It may also be desirable to have a per-zone option for "external", to
mean that the zone is not trusted to point at other internal zones
even though it is locally configured.

> All of this to "fix" a flawed security model.

In part, yes, the immediate issue is someone else's problem. That
said, it's a lot easier to fix it here, definitively, than continue to
use patchwork solutions elsewhere.

In part, this is a DNS security flaw, too. Why can external entities
point their RRs at my names and IP address space and try other novel
forms of DNS-based attack against my hosts? What is the next problem
that will exploit this? Even aside from this particular issue, my
internal DNS server should be configurable to not trust external
servers that want to say things about my names and my IPs. I know
security people who were bothered by this before they learned about
DNS rebinding. DNS rebinding proves that they were right. This is a
flaw in the DNS specification.

Similarly, if one has a policy (not security at all!) that all
internal IPs must be hosted on organizational DNS, in the current
world, it's technically feasible for "rogue" internal groups to bypass
this policy. The rogues just register external DNS names pointing at
internal servers.

So even if this particular issue gets fixed at the browser, it would
be nice for bind to provide a mechanism to deal with the more general
problem.

- Morty


Mordechai T. Abzug

Aug 7, 2007, 9:25:26 AM

On Tue, Aug 07, 2007 at 02:24:50PM +0200, Ralf Weber wrote:

> What if everybody would use proper reverse entries that also had the
> corresponding forward entries and all that secured via DNSSEC? Then
> if the browser would see a difference between forward and reverse
> mapping it should not allow the connection.

That requires a whole lot more work than just making some zone-level
config changes. And the transition isn't clean -- if forward and
reverse DNS don't match, how does a browser know if this is because
the admin hasn't yet gotten around to making them match, or because
there really is a problem? And how do you deal with name-based
virtual hosting, where you might have dozens or even hundreds of
hostnames parked at one IP? And how do you deal with the *next*
vulnerability that happened because the protocol designers didn't
understand this DNS issue?

> Well what is your address space? There are several reasons why names
> may point anywhere.

From my perspective, any addresses that I have defined as in-addr.arpa
zones are the address spaces I want to protect. If worst comes to
worst, I would even happily list out a collection of CIDR
address/netmask pairs that comprise the address space I want to
protect.
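
For classful reverse zones, that list can even be derived mechanically from the zone names themselves. A sketch (hypothetical helper, assuming one- to three-label in-addr.arpa zones):

```python
import ipaddress

def zone_to_net(zone):
    """Hypothetical helper: map a classful in-addr.arpa zone name
    (one to three octet labels) to the CIDR block it covers."""
    labels = zone.removesuffix(".in-addr.arpa").split(".")[::-1]
    octets = labels + ["0"] * (4 - len(labels))
    return ipaddress.ip_network(".".join(octets) + "/" + str(8 * len(labels)))
```

For example, "1.168.192.in-addr.arpa" maps to 192.168.1.0/24 and "10.in-addr.arpa" to 10.0.0.0/8.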

> DNS is just a protocol, not a policy. This is not a DNS security
> flaw IMHO - it just is a feature.

A DNS server implementation implements both protocol and policy.
That's why BIND has configuration options such as allow-query,
allow-recursion, allow-transfer, etc. That's policy stuff, but it
goes in the server. Ideally, RFCs should recommend that this be done.

In different terms -- traditional DNS security issues involve
questions of whether a DNS request has been answered by an acceptable
server or asked by an acceptable client or peer. The question now is,
even if a DNS request has been answered by an acceptable server, is
the answer itself acceptable? This is an obvious and logical
extension of existing security/policy issues that DNS servers such as
BIND address.

- Morty


Ralf Weber

Aug 7, 2007, 8:24:50 AM

Moin!

On 07.08.2007, at 09:46, Mordechai T. Abzug wrote:

> On Tue, Aug 07, 2007 at 03:00:43PM +1000, Mark Andrews wrote:
>
>> All of this to "fix" a flawed security model.
>
> In part, yes, the immediate issue is someone else's problem. That
> said, it's a lot easier to fix it here, definitively, than continue to
> use patchwork solutions elsewhere.

Well, the ultimate fault, as Mark and others said, lies in the browser.
However, rather than overloading the DNS server with functions unrelated
to DNS, we could use the technology available to us to solve this.

What if everybody would use proper reverse entries that also had the
corresponding forward entries and all that secured via DNSSEC? Then
if the browser would see a difference between forward and reverse
mapping it should not allow the connection.
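
A hedged sketch of that browser-side check (pure logic only; the dicts stand in for DNSSEC-validated forward and reverse lookups):

```python
def rebinding_safe(hostname, addr, forward, reverse):
    """Accept a connection to addr for hostname only when the forward
    mapping (name -> addresses) and the reverse mapping (addr -> names)
    agree. `forward`/`reverse` are placeholders for validated
    DNS lookup results, not real resolver calls."""
    return (addr in forward.get(hostname, set())
            and hostname in reverse.get(addr, set()))
```

Name-based virtual hosting then just means the reverse mapping for one address holds many names (multiple PTR records).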

I know, especially when looking at the discussions in the dnsop wg,
that this will not happen any time soon, but as said I'd rather use
existing proven technology instead of adding yet another feature that
might cause yet another bug.

> In part, this is a DNS security flaw, too. Why can external entities
> point their RRs at my names and IP address space and try other novel
> forms of DNS-based attack against my hosts? What is the next problem
> that will exploit this?

Well, what is your address space? There are several reasons why names
may point anywhere. DNS is just a protocol, not a policy. This is not a
DNS security flaw IMHO - it just is a feature.

So long
-Ralf


Chris Buxton

Aug 7, 2007, 9:26:40 PM

Mark,

Please explain where you think the flaw is.

Is it the job of the web browser to understand what IP space is
private and what is public? Is it the job of the browser to create
its own DNS cache? Isn't it valuable that a web browser be responsive
to changes in DNS?

Web servers should always use name-based virtual hosts, with no
default site, in order to somewhat prevent the attack (but the
attacker can still use this method to find out what IP addresses have
web servers, or other open ports). But there is a plethora of reasons
why this is often not done. And FTP service doesn't have the concept
at all, but the attack can work just as well against FTP as HTTP.

In order to protect private data which should be freely available
within an intranet but completely hidden from the outside, without
crippling the browser's ability to adapt to DNS changes, one of the
following must be done, as I see it:

- The browser (on every client machine) must know what domains should
map to internal addresses, and should know what the private IP space
looks like.

- The resolving name server must be prevented from returning private,
internal addresses in A records whose names are not trusted.

It seems to me that the following are true:

- Any layer 7 firewall that understands and inspects DNS messages
should easily be extensible to have a list of internal subnets and
filter responses based on that list. For example, if no outside name
servers should know anything about the internal subnets, drop any
inbound response (or outbound response, for that matter) that
contains such an address. Or if there are external servers that
should be able to return such addresses, special-case them in an
exceptions list; this option must be used with care.

- To help protect the private subnets of those organizations without
a layer 7 firewall, the resolving name server (BIND, dnsmasq, etc.)
should be configurable with a list of private subnets and the
authoritative servers that should be allowed to return answers
containing addresses from those subnets. Perhaps something involving
a default behavior in the case of conditional forwarding would make
this easier to configure, or perhaps it would be simplest to ignore
that. I foresee an option substatement named "private-subnets" that
takes an ACL, plus a new server substatement named
"allow-private-subnets".

I really don't see this as being much more difficult for named to
process than sorting responses by network topology, or the zone type
"delegation-only". And for a large organization that uses global
forwarding from small resolvers to big forwarders, it's something
that can be set up on the big forwarders on the edges of the network.

Chris Buxton
Men & Mice

On Aug 6, 2007, at 10:00 PM, Mark Andrews wrote:

>
> It's also not as straightforward as people seem to think.
>
> You would need lots of exception processing which would be
> a combination of name and/or address and/or tsig pairs.

> All of this to "fix" a flawed security model.
>

Mark Andrews

Aug 7, 2007, 9:39:39 PM

> On Tue, Aug 07, 2007 at 03:00:43PM +1000, Mark Andrews wrote:
>
> > It's also not as straightforward as people seem to think.
>
> > You would need lots of exception processing which would be
> > a combination of name and/or address and/or tsig pairs.
>
> Both of my posts so far have said that an exception mechanism would be
> needed. That said, bind already has a lot of ACL-related plumbing.
> One way to implement this would be a global ACL option like
> "allow-external-pointing-to-this" that defaults to "any". Each zone
> also takes a (possible) "allow-external-pointing-to-this" ACL that
> overrides the global option. Each "allow-external-pointing-to-this"
> ACL is a regular bind ACL with addresses, TSIGs, or whatever.

There is a big difference in controlling queries vs controlling
what is put into the cache.

The obvious one would be a border cache where you wouldn't accept
internal addresses from external nameservers, with something like
this sortlist-style syntax:

allow-cache {
// allow internal from internal
{ { range; }; { range; } };
// disallow internal from external
{ { range; }; { none; }; };
// allow everything else from anywhere
{ { any; }; { any; }; };
};

However named really has no way to know if the source address of
the packet is valid. So what we want is the destination address
of the packet as well, which isn't available in the basic socket
API. IPv6 supports retrieving the destination address in the
advanced API.

We can fake it in IPv4 if we use a per-interface socket and a
wildcard socket. You send on the wildcard socket and receive
on the per-interface sockets; otherwise you need the entire
routing topology inside named.

Anti-spoofing firewall rules can also help some but not all
of the time.

allow-cache {
// allow internal from internal over internal
{ { range; }; { range; }; { range; } };
// disallow internal from external
{ { range; }; { none; }; { any; }; };
// allow everything else from anywhere
{ { any; }; { any; }; { any; }; };
};

Next people will want to add in namespaces.

allow-cache "example.net" {
// allow internal from internal over internal
{ { range; }; { range; }; { range; }; };
// disallow internal from external
{ { range; }; { none; }; { any; }; };
// allow everything else from anywhere
{ { any; }; { any; }; { any; }; };
};

Then they will want to use nameserver names rather than addresses.

allow-cache "example.net" {
{
{ range; };
{ server ns1.example.net; server ns2.example.net; };
{ range; };
};
{ { range; }; { none; }; { any; }; };
{ { any; }; { any; }; { any; }; };
};

Now we have to cope with glue from the parents.

....

As I said it gets complicated very fast.

Ralf Weber

Aug 7, 2007, 11:58:07 PM

Moin!

On 07.08.2007, at 15:25, Mordechai T. Abzug wrote:

> On Tue, Aug 07, 2007 at 02:24:50PM +0200, Ralf Weber wrote:
>

>> What if everybody would use proper reverse entries that also had the
>> corresponding forward entries and all that secured via DNSSEC? Then
>> if the browser would see a difference between forward and reverse
>> mapping it should not allow the connection.
>

> That requires a whole lot more work than just making some zone-level
> config changes.

I said that I don't see it happening any time soon; however, I doubt
that your solution is done by only some config changes. It at least
requires some code changes to the name server software.

> And the transition isn't clean -- if forward and
> reverse DNS don't match, how does a browser know if this is because
> the admin hasn't yet gotten around to making them match, or because
> there really is a problem?

Well, how do you deal with fools? ;-) If someone wants to use
JavaScript, Flash, or other technologies, they should be able to
configure the foundations.
> And how do you deal with name-based
> virtual hosting, where you might have dozens or even hundreds of
> hostnames parked at one IP?

Multiple PTR records. DNS can today answer with big UDP packets or
fall back to TCP.

> And how do you deal with the *next*
> vulnerability that happened because the protocol designers didn't
> understand this DNS issue?

As said, it isn't a DNS issue. The issue is with the protocol
designers. The next vulnerability may also be in the code that was
needed to introduce that feature.

> From my perspective, any addresses that I have defined as in-addr.arpa
> zones are the address spaces I want to protect. If worst comes to
> worst, I would even happily list out a collection of CIDR
> address/netmask pairs that comprise the address space I want to
> protect.

Well, so you are running a server that works as both an authoritative
server and an iterative resolver. While this may be common in an
enterprise environment, it is not in a service provider environment.
A service provider may have two customers where a web site is
transferred between them, while it also may be one customer
attacking another. How do you judge which is which?

So long
-Ralf


Dawn Connelly

Aug 8, 2007, 1:55:05 AM

Just out of curiosity... did you happen to go to a lecture or two at DefCon
this year? There were two lectures about this exact topic over the weekend.
The moral of both lectures is that this is a bad behavior within browsers.
Our dear DNS friend Dan Kaminsky gave a lecture titled "Black Ops 2007:
Design Reviewing The Web". David Byrne gave a lecture "Intranet Invasion
With Anti-DNS Pinning." While Kaminsky and Byrne gave slightly different
versions, it's basically the same attack. And it seemed to me that both came
to the same conclusion that it needs to be addressed in the browser. You
might be interested in googling up these presentations if you didn't catch
them in Vegas.
</end unsolicited 2 cents>

Stephane Bortzmeyer

Aug 8, 2007, 4:51:20 AM

On Tue, Aug 07, 2007 at 10:55:05PM -0700,
Dawn Connelly <dawn.c...@gmail.com> wrote
a message of 72 lines which said:

> The moral of both lectures is that this is a bad behavior within
> browsers.

Is there somewhere a text describing "good practices" for Web
browsers? Because the half-baked advice I've read in papers like
"Protecting Browsers from DNS Rebinding Attacks"
(http://crypto.stanford.edu/dns/dns-rebinding.pdf) do not seem
perfectly reviewed (the mention of "class C" awakens the pedant in my
soul).

Everyone seems to say that it's browser's fault, but is there some set
of written rules that the browser's authors should have followed? For
instance, do we endorse pinning, which is a violation of the DNS
standard and its rules about the TTL?

Mordechai T. Abzug

Aug 8, 2007, 5:24:47 AM

On Wed, Aug 08, 2007 at 11:39:39AM +1000, Mark Andrews wrote:

> There is a big difference in controlling queries vs controlling
> what is put into the cache.

Why? You are actually controlling queries in both cases. The only
difference is at what stage of the query process we are looking.

The normal model for recursive queries is:

(1) process client recursive query

(2) Check permissions

(3) If the query can be answered from cache or from local
authoritative zones, answer it

(4) Otherwise, go out and find out the external information on behalf
of the client

(5) [empty]

(6) insert external response into cache

(7) respond to client

What I am proposing is to fill in step 5 with a check of the response.
It's that simple.

> allow-cache {
> // allow internal from internal
> { { range; }; { range; } };
> // disallow internal from external
> { { range; }; { none; }; };
> // allow everything else from anywhere
> { { any; }; { any; }; };
> };

That adds a whole lot of unnecessary complexity, and it doesn't even
begin to deal with CNAMEs. Remember, we don't need to enumerate what
to protect, because we (should) already have authoritative DNS zones
that contain the stuff to protect. So the config mechanism should be
as simple as:

acl externals_that_legitimately_point_to_me {
10.0.0.45; // or whatever
172.16.1.1; // or whatever
};

options {
// disable others pointing to my zones by default
allow-external-pointing-to-this { none; };

// other options elided
};

view "default" {
zone "1.168.192.in-addr.arpa" {
type master;
allow-transfer { can_axfr; };
allow-query { any; };
file "zone/192.168.1";
// no externals can legitimately point at this domain,
// so no allow-external-pointing-to-this required here
};
zone "2.168.192.in-addr.arpa" {
type master;
allow-transfer { can_axfr; };
allow-query { any; };
file "zone/192.168.2";

// there are some external zones hosted at a couple of
// known IPs that have A records to this zone. Let's
// allow them to do so, using an ACL defined earlier
allow-external-pointing-to-this {
externals_that_legitimately_point_to_me;
};
};
zone "example.com" {
type master;
allow-transfer { can_axfr; };
allow-query { any; };
file "zone/example.com";
// no externals can legitimately point at this domain,
// so no allow-external-pointing-to-this required here
};
zone "example2.com" {
type master;
allow-transfer { can_axfr; };
allow-query { any; };
file "zone/example2.com";

// there are some external zones hosted at a couple of
// known IPs that have CNAMEs to this zone. Let's allow them
// to do so, using an ACL defined earlier
allow-external-pointing-to-this {
externals_that_legitimately_point_to_me;
};
};
zone "friend.com" {
type slave;
file "zone/friend.com";
allow-query { any; };
masters { 4.7.57.48 ; };

// this isn't really my zone, it's a secondary for
// a friend's company, so I don't have any
// expectations about other entities pointing at it,
// and there are no security implications to allowing this
// from my own company's perspective
allow-external-pointing-to-this { any; };

// Note that I probably shouldn't even have this zone
// here at all. It should be in a separate nameserver or
// view that is used for the DNS hierarchy, rather than
// as part of my caching config.
};
// etc
};

Note that we only needed one new syntactic element:
allow-external-pointing-to-this, which takes a BIND ACL. Also note
that if we are really going to do this properly, it makes sense to
fully illustrate the caching DNS component vs. the hierarchical DNS
component; I included both above in a single config, which is not
really correct.

> However named really has no way to know if the source address of
> the packet is valid. So what we want is the destination address
> of the packet as well which isn't available in basic socket API.

<snip>

Why is the destination address of the packet coming into play? That
doesn't matter at all. And the validity of source addresses is no
worse a problem than it is for DNS in general.

The basic mechanism here is simple: a DNS response from anywhere
should not contain internal DNS names or internal DNS IPs in the DNS
response portion of the packet. In the general case (i.e. no
exception mechanism), it doesn't matter what the source IP or
destination IP of the response is -- so long as it didn't come from my
authoritative information, then it came from *somewhere* outside, and
I don't want to cache it or pass it on to clients. It's that easy.

In the event that an exception mechanism is configured -- either by
server IP/netmask or by TSIG -- then recognizing the source is no
worse than the general problem of DNS spoofing. If TSIG is
configured, then DNS spoofing is not a problem at all; if TSIG is not
configured, then DNS spoofing is no more of a problem than it is for
any non-DNSSEC query. This is why DNSSEC was invented.

IMHO, spoofing is actually less of a problem than in the general case,
since unlike the normal case where the spoofer can easily determine
what the correct nameserver should be and try to spoof its IP, here
the spoofer must guess an IP that would be in an exception ACL, which
may or may not exist, and requires knowing offhand about the existence
of third-party records pointing at the domain.

> IPv6 supports retrieving the destination address in the advanced
> API.

Again, I fail to see what the packet destination address has to do
with anything. Please explain this.

> Next people will want to add in namespaces.

[snip]

As per my previous post and as per the above, the simplest way to
handle namespaces is to just use the existing "zone" statement
already present and basic to DNS.

> Then they will want to use nameserver names rather than addresses.

There are already named ACLs in bind which do a fine job with this.

> Now we have to cope with glue from the parents.

I originally thought this as well, and included "known glue records"
as a necessary exception in my original post. Then I gave it more
thought, and realized that you don't need to worry about glue, for a
very simple reason -- a DNS server doesn't walk the hierarchy for its
own zones because it's authoritative for them. So one does not need
to worry about delegations to one's own zones.

- Morty


Niall O'Reilly

Aug 8, 2007, 5:43:48 AM
On 8 Aug 2007, at 02:26, Chris Buxton wrote:

> In order to protect private data which should be freely available
> within an intranet but completely hidden from the outside, without
> crippling the browser's ability to adapt to DNS changes, one of the
> following must be done, as I see it:
>
> - The browser (on every client machine) must know what domains should
> map to internal addresses, and should know what the private IP space
> looks like.
>
> - The resolving name server must be prevented from returning private,
> internal addresses in A records whose names are not trusted.

I may be missing something, but I expect blocking ports 80
and 443 at the boundary of the intranet and providing a managed
proxy (as opposed to quasi-unmanaged browser-platform machines)
may be useful as a third option.

Of course, this respects layering, and so may be out of scope
for this thread! 8-)


Best regards,

Niall O'Reilly
University College Dublin IT Services

PGP key ID: AE995ED9 (see www.pgp.net)
Fingerprint: 23DC C6DE 8874 2432 2BE0 3905 7987 E48D AE99 5ED9


Mordechai T. Abzug

Aug 8, 2007, 6:42:34 AM
On Wed, Aug 08, 2007 at 05:58:07AM +0200, Ralf Weber wrote:

> I said that I don't see it happen any time soon, however I doubt
> that your solution is done by only some config changes, it at least
> requires some code changes to a name server software.

Oops. Yes, sorry, wasn't clear -- both a DNS server config change and
a DNS server code change would be required.

> As said, it isn't a DNS issue. The issue is with the protocol
> designers.

I submit that we have an inherently flawed model if I, as a sysadmin,
cannot control my own DNS servers to prevent them from passing
external entities' RRs that point at my own names and IPs. This is an
enabling vulnerability -- it's not a direct problem by itself, but it
takes one other protocol designer that doesn't understand DNS to do
something stupid, and it becomes a problem.

This is actually the second known time that DNS rebinding has been a
problem. And who knows if there aren't other such problems that
haven't been noticed? So are we going to learn from history and fix
this at the DNS layer, or wait until the next problem?

[Note: we can really only fix this for externals pointing to internal
names/IPs, not for externals pointing to third-party names/IPs. So
the proposed solution is a complete fix for part of the problem, but
another part of the problem remains.]

> The next vulnerability may also be in the code that was needed to
> introduce that feature.

Yes, there could always be a bug in introduced code, but we don't sit
paralyzed and afraid to introduce new features because of that
possibility.

> Well, so you are running a server that works as both an authoritative
> server and an iterative resolver. While this may be common in an
> enterprise environment, it is not in a service provider environment.
> A service provider may have two customers where a web site is
> transferred between them while it also may be the one customer
> attacking another. How do you judge which is which?

I discussed this in a previous email:

http://groups.google.com/group/comp.protocols.dns.bind/msg/2895c1c176e37ca0

Quoting from there:

In a "view" environment, each view should consider its collection of
zones to be independent of other zone collections. So if view A
contains zones X and Y, and view B contains zone Z, it's OK from A's
perspective for X to point to Y, but not for Z to point to Y.

...

Service providers running DNS servers for multiple entities, each of
which does not trust the other, will have a more complex
perspective. The cleanest way to handle this would be to split the
different entities into separate views -- which is probably the
best way to handle this architecture even without this issue.
However, I cannot personally speak to the service provider
environment.

As I said there, I think this can be handled by making the boundary
for external DNS be views, and splitting customers into different
views. However, I personally have never worked in a service provider
environment, so I'm not sure. That said, I don't think that the
difficulties of doing this for service providers should stop us from
doing this for the many other common scenarios such as corporations,
government agencies, non-profits, SOHOs, etc.
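
In BIND's existing configuration language, the per-customer split
suggested above would look roughly like this (the view names, zones,
and networks are invented for illustration):

```
// Two customers isolated into separate views. Under the proposed
// policy, zones within one view may point at each other, but an
// external answer must not reference the names or IPs of the
// querying client's own view.
view "customer-a" {
    match-clients { 192.0.2.0/24; };
    zone "x.example" { type master; file "x.example.db"; };
    zone "y.example" { type master; file "y.example.db"; };
};
view "customer-b" {
    match-clients { 198.51.100.0/24; };
    zone "z.example" { type master; file "z.example.db"; };
};
```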

- Morty


Mordechai T. Abzug

Aug 8, 2007, 6:55:25 AM
On Tue, Aug 07, 2007 at 10:55:05PM -0700, Dawn Connelly wrote:

> Just out of curiosity... did you happen to go to a lecture or two at
> DefCon this year? There were two lectures about this exact topic
> over the weekend. The moral of both lectures is that this is a bad
> behavior within browsers.

There is no argument that it is (also) a bad behavior within browsers.
And it needs to be addressed within browsers even if it is addressed
within DNS, because the DNS component only helps with attacks aimed at
the internal network, not at attacks aimed at external networks.

What *is* being argued: the internal attacks are fundamentally enabled
by a DNS behavior that could/should be controlled within DNS caching
servers. If we fix this, then even if DNS rebinding comes back a
third time in some future form, internal networks will still be safe.
We can fix a whole class of attack against internal networks, right
here and right now. Let's do it!

- Morty


Stephane Bortzmeyer

Aug 8, 2007, 9:59:17 AM
On Wed, Aug 08, 2007 at 06:42:34AM -0400,
Mordechai T. Abzug <morty...@frakir.org> wrote
a message of 74 lines which said:

> I submit that we have an inherently flawed model if I, as a
> sysadmin, cannot control my own DNS servers to prevent them from
> passing external entities' RRs that point at my own names and IPs.

This is the model used by the DNS from the beginning. The DNS does not
care about *identity*, it is just a *mapping* between domain names and
values (often IP addresses), without any regard for the semantics of
these values.

Changing this model just because you missed this important point seems
an over-reaction.

> but it takes one other protocol designer that doesn't understand DNS
> to do something stupid, and it becomes a problem.

Indeed, but if we change the DNS to something so radically different
from what it is now, people will do other stupid things with it.



> This is actually the second known time that DNS rebinding has been a
> problem.

And there have been millions of times where it has been useful that
you can direct your domain names to any value you choose.

> [Note: we can really only fix this for externals pointing to internal
> names/IPs, not for externals pointing to third-party names/IPs.

No, you cannot even fix it. On your resolvers, you can (provided you
block outgoing access to port 53, to prevent your users from running
their own resolvers). On the whole Internet? Consider that we still do
not have proper Internet Routing Registries, and yet you want the DNS
to know that www.frakir.org is not allowed to point to 192.134.4.69?


Shumon Huque

Aug 8, 2007, 10:48:27 AM
On Tue, Aug 07, 2007 at 06:26:40PM -0700, Chris Buxton wrote:
> Mark,
>
> Please explain where you think the flaw is.
>
> Is it the job of the web browser to understand what IP space is
> private and what is public? Is it the job of the browser to create
> its own DNS cache? Isn't it valuable that a web browser be responsive
> to changes in DNS?
>
> Web servers should always use name-based virtual hosts, with no
> default site, in order to somewhat prevent the attack (but the
> attacker can still use this method to find out what IP addresses have
> web servers, or other open ports). But there is a plethora of reasons
> why this is often not done. And FTP service doesn't have the concept
> at all, but the attack can work just as well against FTP as HTTP.
>
> In order to protect private data which should be freely available
> within an intranet but completely hidden from the outside, without
> crippling the browser's ability to adapt to DNS changes, one of the

Perhaps this is where the real flaw is. This thread seems to have
focussed on flaws in the browser security model and that is certainly
partly true. But the other flawed security model that is doing even
more to facilitate this threat is the notion that you can install
a perimeter firewall around hosts interacting with the Internet at
large, and then rely on source IP address based authorization to
protect internal resources. How about protecting those resources
with proper strong (cryptographic) authentication instead? That would
appear to solve the problem more generally - since the next time it
will be another application that is similarly tricked and not a
web browser.

--Shumon.


Mordechai T. Abzug

Aug 8, 2007, 12:44:40 PM
On Wed, Aug 08, 2007 at 03:59:17PM +0200, Stephane Bortzmeyer wrote:
> On Wed, Aug 08, 2007 at 06:42:34AM -0400,
> Mordechai T. Abzug <morty...@frakir.org> wrote
> a message of 74 lines which said:
>
> > I submit that we have an inherently flawed model if I, as a
> > sysadmin, cannot control my own DNS servers to prevent them from
> > passing external entities' RRs that point at my own names and IPs.
>
> This is the model used by the DNS from the beginning. The DNS does not
> care about *identity*, it is just a *mapping* between domain names and
> values (often IP addresses), without any regard for the semantics of
> these values.

Applications of DNS such as TCP wrappers/libwrap, Apache's "Allow
from" syntax, .rhosts, and even ssh's hostbased authentication are all
real-world examples where DNS is used, at least in part, for identity.
I don't know how DNS was intended to be used, but the real world has
chosen to use it for identity for quite some time.
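
The identity check those tools rely on is essentially forward-confirmed
reverse DNS. A minimal sketch, with the lookups injected as functions so
the logic can be exercised without live DNS (the stub mappings in the
test are invented):

```python
def fcrdns_ok(client_ip, ptr_lookup, a_lookup):
    """Forward-confirmed reverse DNS, the check behind tools like
    TCP wrappers: the PTR name for the client is trusted only if
    that name resolves back to the client's own address.

    ptr_lookup: IP -> hostname (raises KeyError if no PTR)
    a_lookup:   hostname -> list of IPs (raises KeyError if no A)
    """
    try:
        name = ptr_lookup(client_ip)        # reverse: IP -> name
        return client_ip in a_lookup(name)  # forward: name -> IPs
    except KeyError:
        return False
```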

> > This is actually the second known time that DNS rebinding has been
> > a problem.

> And there have been millions of times where it has been useful that
> you can direct your domain names to any value you choose.

And it will still be possible to direct your domain name to any value
you choose. Only now, a recipient of that value may choose not to
honor it -- which should be the recipient's choice.

> > [Note: we can really only fix this for externals pointing to
> > internal names/IPs, not for externals pointing to third-party
> > names/IPs.

> No, you cannot even fix it. On your resolvers, you can (providing you
> block outgoing access to port 53, to prevent your users to have their
> own resolvers). On the whole Internet, think that we still do not have
> proper Internet Routing Registries and you want the DNS to know that
> www.frakir.org is not allowed to point to 192.134.4.69?

Yes, as was said previously, this would only allow my own DNS servers
to not honor DNS records from outsiders pointing at my network. It
would not help me with the problem of outsiders pointing at
third-party networks, and by extension, it would not help me to
prevent third parties from honoring DNS records from outsiders
pointing at my network.

The paper at http://crypto.stanford.edu/dns/dns-rebinding.pdf, section
5.2, actually does describe a scheme that would fix all three problem
scenarios, would allow for a clean transition and incremental
deployment, and would not require any code or configuration changes to
DNS servers. The basic concept is that each site publishes, in
reverse DNS, records of the form auth.$ip_rev.in-addr.arpa and
$hostname.$ip_rev.in-addr.arpa. So when a browser (or other
application) wants to verify that a hostname is allowed for a given
IP, it looks for auth.$ip_rev.in-addr.arpa, which tells it that the
scheme is enabled, and then for $hostname.$ip_rev.in-addr.arpa, to see
if $hostname is authorized. This allows for a clean transition, since
IPs that haven't yet transitioned won't have the
auth.$ip_rev.in-addr.arpa, and therefore browsers and applications
know that they're not protected yet by this scheme. Anyone thinking
of deploying this scheme? More relevantly, are browsers likely to
honor it anytime soon? I don't see it being deployed quickly, which
is why I'd like to at least have the ability to protect the internal
network.
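
A sketch of the client-side check that scheme implies, following the
record-naming convention just described; the `exists()` predicate
stands in for an actual DNS query, and the sample data is invented:

```python
def reversed_labels(ip):
    """'203.0.113.5' -> '5.113.0.203.in-addr.arpa'"""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

def host_allowed_for_ip(hostname, ip, exists):
    """If the address owner publishes auth.<ip-reversed>.in-addr.arpa,
    the scheme is enabled, and <hostname>.<ip-reversed>.in-addr.arpa
    must also exist for the binding to be honored. Sites that have not
    opted in keep legacy behavior."""
    base = reversed_labels(ip)
    if not exists("auth." + base):
        return True  # scheme not enabled for this IP; allow as today
    return exists(hostname.rstrip(".") + "." + base)
```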

- Morty


Pete Ehlke

Aug 8, 2007, 12:56:12 PM
On Wed Aug 08, 2007 at 12:44:40 -0400, Mordechai T. Abzug wrote:
>
>Applications of DNS such as TCP wrappers/libwrap, Apache's "Allow
>from" syntax, .rhosts, and even ssh's hostbased authentication are all
>real-world examples where DNS is used, at least in part, for identity.
>I don't know how DNS was intended to be used, but the real world has
>chosen to use it for identity for quite some time.
>
And every decent security roadmap ever written tells you to use IP
addresses for libwrap/ssh/allow-from/etc for precisely this reason:
using DNS as an identity service is inherently insecure.


Mordechai T. Abzug

Aug 8, 2007, 2:00:24 PM
On Wed, Aug 08, 2007 at 11:56:12AM -0500, Pete Ehlke wrote:

> And every decent security roadmap ever written tells you to use IP
> addresses for libwrap/ssh/allow-from/etc for precisely this reason:
> using DNS as an identity service is inherently insecure.

Every good security document tells you not to trust IP auth, either.
We're all supposed to use strong crypto (e.g. RSA keys for SSH, SSL
client certs for SSL/TLS) and/or two-factor auth instead -- which is
idealistic, but doesn't reflect the reality of a whole lot of
name-based (and IP-based) auth that isn't going anywhere.

I can't change the apps on the network, I can just try to make the
network as secure as possible.

There are apps on many large networks that are horrifically insecure,
but they meet a critical business need, and hey, "we have a firewall",
so someone high up signs off on the risk. It's the job of the firewall
and infrastructure services to protect such apps from themselves as
much as possible. I can't fix the app, but I might be able to fix DNS.

- Morty

