The current DNS prefetch could be improved by a simple extension to HTTP.
Usually the web server knows most or all host names that are referenced by a
web page long before the page is requested by anyone. Thus it can deliver the
host names and their respective IP addresses instantly.
Accessing www.google.de/ might result in these additional HTTP header lines:
X-DNS4: clients1.google.de CNAME clients.l.google.com
X-DNS4: clients.l.google.com A 184.108.40.206
X-DNS4: clients.l.google.com A 220.127.116.11
TTL information should be added.
The traffic increase is minimal, and the latency advantage is obvious. For
pages with links to many different host names this is more efficient than
client-side DNS prefetching.
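To illustrate the client side, here is a rough sketch of how a browser might
parse such headers. The "name TYPE value" layout follows the example above;
the optional trailing TTL field and all function names are my own assumptions,
not an existing API:

```python
# Sketch: parsing hypothetical X-DNS4 response header values on the client.
# Layout assumed: "<name> <record type> <value> [<ttl>]".

def parse_x_dns4(header_values):
    """Turn X-DNS4 header values into (name, rtype, value, ttl) tuples."""
    records = []
    for value in header_values:
        parts = value.split()
        if len(parts) < 3:
            continue  # ignore malformed hints rather than failing the page
        name, rtype, data = parts[0], parts[1], parts[2]
        # TTL is optional; treat a fourth numeric token as seconds.
        ttl = int(parts[3]) if len(parts) > 3 and parts[3].isdigit() else None
        records.append((name, rtype, data, ttl))
    return records

headers = [
    "clients1.google.de CNAME clients.l.google.com",
    "clients.l.google.com A 184.108.40.206 300",
]
print(parse_x_dns4(headers))
```

The parsed tuples could then be fed straight into the browser's internal DNS
cache (subject to the trust questions discussed later in the thread).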
1) This feature requires changes to both browsers and servers. The change on
the browser side is probably trivial. The effort on the server side depends on
the range of applications to be covered. Webmasters to whom performance is
important will implement this even if it takes some effort. But for the
majority of web site owners to use it, it must be very easy to implement.
2) If the client already has all the host names in its DNS cache then the
download size increases uselessly. But the amount of wasted traffic is
negligible and can be reduced by clever decisions about which host names are
delivered with which pages.
3) This bypasses the OS resolver cache. This is more a design beauty problem
than a practical one. Usually the browser will not use the same host names as
applications for other Internet services.
4) Another theoretical one: for certain (low-traffic) domains the number of
DNS queries might increase a lot.
I see two implementation options on the server side: Either the server
analyses the pages it delivers (just occasionally, to see whether any host
names change) or the host name list is stored in a file for each web page. A
daemon would refresh the DNS information before it times out.
This does not work for "virtual" web pages, of course. For systems using URLs
like example.com/?id=123456578&attr1=123&attr2=456 these headers can be
generated by the dynamic pages themselves, if that makes sense for that site.
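The first option (analysing delivered pages) could start with a host name
extraction step like the following, sketched with Python's standard library;
the class and attribute choices are illustrative, not a proposal for a
concrete implementation:

```python
# Sketch: extract the external host names referenced by an HTML page, so a
# background daemon can resolve them and build the X-DNS4 header lines.

from html.parser import HTMLParser
from urllib.parse import urlsplit

class HostExtractor(HTMLParser):
    # Attributes that commonly carry URLs; a real scanner would cover more.
    URL_ATTRS = {"href", "src"}

    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for attr, value in attrs:
            if attr in self.URL_ATTRS and value:
                host = urlsplit(value).hostname
                if host:           # relative URLs have no host -> skip
                    self.hosts.add(host)

page = ('<a href="http://www.example.com/x">x</a>'
        '<img src="//cdn.example.net/i.png">')
extractor = HostExtractor()
extractor.feed(page)
print(sorted(extractor.hosts))
```

A daemon would run this over the stored pages from time to time, resolve the
collected names, and keep the resulting header lines fresh.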
A nice side effect: by sharing the DNS load between DNS servers and web
servers, and above all by spreading the DNS information further, (short) DoS
attacks against DNS would become (much) less effective.
On Mar 29, 12:19 pm, Hauke Laging <accou...@hauke-laging.de> wrote:
> the current DNS prefetch could be improved by a simple extension to HTTP.
> Usually the web server knows most or all host names that are referenced by a
> web page long before the page is requested by anyone. Thus it can instantly
> deliver the host names and respective IP addresses.
> Accessing www.google.de/ might result in these additional HTTP header lines:
The main problem would be whether a web server can be trusted to give
out DNS records. Let's say http://evil.org/ puts up lots of iframes to
login pages on facebook, banks, etc. and sends faked DNS records such
that they get the traffic that the browser believed it was sending to
the login forms of these banks.
It would make Man-in-the-middle attacks much easier, and we've just
barely gotten rid of a similar problem with recursive DNS requests.
Suggestions to avoid this problem? Perhaps it's still useful to send
these as "hints" about which domain names will be needed, and it would
be up to each browser to figure out the balance of security vs. speed.
It could perhaps start the connection to the hinted IP address
immediately, verify the DNS-to-IP mapping in parallel, and cancel the
first connection in case the hint was incorrect.
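That optimistic-connect-plus-verify idea could be sketched like this. The
connect and resolve functions are injected so the control flow can be shown
without real network traffic; all names here are made up for illustration:

```python
# Sketch: connect to the hinted IP immediately, resolve the name via normal
# DNS in parallel, and drop the optimistic connection if the hint was wrong.

from concurrent.futures import ThreadPoolExecutor

def fetch_with_hint(host, hinted_ip, connect, resolve):
    with ThreadPoolExecutor(max_workers=2) as pool:
        conn_future = pool.submit(connect, hinted_ip)  # optimistic connect
        dns_future = pool.submit(resolve, host)        # authoritative lookup
        real_ips = dns_future.result()
        conn = conn_future.result()
        if hinted_ip in real_ips:
            return conn               # hint verified: keep the connection
        conn.close()                  # hint wrong: cancel and reconnect
        return connect(real_ips[0])

class FakeConn:
    """Stand-in for a TCP connection, so the example needs no network."""
    def __init__(self, ip):
        self.ip, self.closed = ip, False
    def close(self):
        self.closed = True

conn = fetch_with_hint("www.example.com", "192.0.2.1",
                       connect=FakeConn,
                       resolve=lambda host: ["192.0.2.1", "192.0.2.2"])
print(conn.ip)
```

The "paranoia mode" mentioned later in the thread would additionally hold back
the received data until the DNS result confirms the hint.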
Load balancing with round-robin and similar one-name-to-many-IPs schemes
would then cause a performance penalty for browsers that took a hint
which needed to be cancelled, as well as for hints given by web servers
with incorrectly configured X-DNS4 headers.
Also, have you seen Google SPDY? It's a project to extend HTTP in a similar
spirit. Oh, by the way -- SPDY is already quite far integrated into Chrome.
> The main problem would be whether a web server can be trusted to give
> out DNS records.
I thought about that and was of the opinion that this does not cause a
problem, as an evil web site could link to anywhere anyway. Of course, it
makes a difference whether you click on an evil link or your browser is
misdirected without any action on your part.
> Let's say http://evil.org/ puts up lots of iframes to
> login-pages on facebook, banks, etc and sends faked DNS records such
> that they get the traffic that the browser believed it was sending to
> the login forms on these banks etc.
> It would make Man-in-the-middle attacks much easier, and we've just
> barely gotten rid of a similar problem with recursive DNS requests.
> could perhaps start the connection to the hinted IP address
> immediately, and in parallel verify the DNS-to-IP and cancel the
> first connection in case it was incorrect.
Exactly. In paranoia mode you could withhold the downloaded data from further
processing until the DNS result arrives.
This would result in more DNS lookups than today (the periodic server-side
requests would be additional; no client requests would be saved). But that
seems acceptable to me.
> Load balancing with round-robin and similar one name-many IP:s would
> then be cause a performance punishment to browsers who took a hint
> that needed to be cancelled, as well as hints given by web servers
> with incorrectly configured X-DNS4 hints.
Problems can be caused by round-robin DNS setups which give only a subset
response (like Google's). This could easily be addressed by the server-side
DNS resolver, though: if that daemon notices that the looked-up IP addresses
change often, it would not include an IP for that host (thus avoiding the
performance penalty). The host name could still be written without an IP into
an X-DNS4 header in order to allow the browser to start the regular DNS
lookup as soon as possible.
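That heuristic might look roughly like this: remember the answer sets seen for
each host across the periodic lookups, and emit only the bare name once the
answers change too often. The threshold and all names are my own assumptions:

```python
# Sketch: server-side decision whether an X-DNS4 line should carry an IP
# address, or only the host name (for hosts with unstable/rotating answers).

from collections import defaultdict

class HintTable:
    def __init__(self, max_distinct_sets=2):
        self.seen = defaultdict(set)   # host -> distinct answer sets observed
        self.latest = {}               # host -> most recent IP list
        self.max_distinct_sets = max_distinct_sets

    def record_lookup(self, host, ips):
        """Called by the resolver daemon after each periodic lookup."""
        self.seen[host].add(frozenset(ips))
        self.latest[host] = list(ips)

    def header_line(self, host):
        if len(self.seen[host]) > self.max_distinct_sets:
            return host                # name only: client resolves it early
        return "%s A %s" % (host, self.latest[host][0])

table = HintTable()
table.record_lookup("static.example.com", ["192.0.2.10"])
for ip in ("192.0.2.1", "192.0.2.2", "192.0.2.3"):
    table.record_lookup("rr.example.com", [ip])   # answers keep changing
print(table.header_line("static.example.com"))
print(table.header_line("rr.example.com"))
```

Here the stable host gets a full A-record hint, while the round-robin host is
demoted to a name-only hint that merely triggers an early client lookup.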
It probably makes sense to rotate the IP order because otherwise this feature
might kill round-robin DNS (if a big share of accesses comes from the same
server).
It might be optimal to defer these DNS lookups until the downloads have
begun. Thus there is no additional latency for the downloads (it would be
quite low anyway), and the DNS lookups should finish before the downloads do.