
Reducing DNS latency


Vulimiri, Ashish

Dec 5, 2014, 2:49:56 PM
to dev-tech...@lists.mozilla.org, Godfrey, Brighten
Hi,

I’m a grad student at the U of Illinois, and I’ve been looking into a technique for improving DNS lookup latency, involving replicating DNS requests to multiple DNS servers in parallel. We’re seeing a significant reduction in latency when we try this: 25-60% better raw DNS latency and, in initial experiments, 6-15% better total browser page load times.

Raw DNS performance: sec 3.2 in http://web.engr.illinois.edu/~vulimir1/papers/13-conext.pdf
Impact on web page load times: http://arxiv.org/abs/1306.3534

Would there be any interest in incorporating something like this in the Firefox code?

Thanks,
Ashish

Daniel Stenberg

Dec 5, 2014, 3:01:12 PM
to Vulimiri, Ashish, dev-tech...@lists.mozilla.org, Godfrey, Brighten
On Fri, 5 Dec 2014, Vulimiri, Ashish wrote:

> Would there be any interest in incorporating something like this in the
> Firefox code?

Have you given any closer thought to exactly how it would or could be
done? Firefox is using the "stock" name-resolving functions, after all...

--

/ daniel.haxx.se

Vulimiri, Ashish

Dec 5, 2014, 4:13:26 PM
to Daniel Stenberg, dev-tech...@lists.mozilla.org, Godfrey, Brighten
I think this would need to be done as a drop-in replacement for getaddrinfo and similar functions: something that actually goes off and sends requests to multiple DNS servers, listens for responses, and returns once it gets the first reply.

One issue is that, depending on how this is implemented, it would either end up creating a separate socket on each DNS request to send requests and wait for responses, or require a separate thread that would manage all requests/responses -- both of which add their own overhead.
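
To make the first-reply-wins idea concrete, here is a minimal Python sketch. The resolver addresses and the 2-second timeout are placeholders I am picking purely for illustration; an actual drop-in for getaddrinfo would live at the C level and would parse the response rather than returning raw bytes.

import random
import select
import socket
import struct

def build_query(hostname):
    """Build a minimal DNS query packet asking for an A record."""
    header = struct.pack(">HHHHHH",
                         random.randint(0, 0xFFFF),  # transaction ID
                         0x0100,      # standard query, recursion desired
                         1, 0, 0, 0)  # one question, no other records
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def replicated_lookup(hostname, resolvers, timeout=2.0):
    """Send the same query to every resolver; return the first raw reply."""
    query = build_query(hostname)
    socks = []
    try:
        for server in resolvers:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.sendto(query, (server, 53))
            socks.append(s)
        readable, _, _ = select.select(socks, [], [], timeout)
        if not readable:
            raise TimeoutError("no resolver answered in time")
        reply, addr = readable[0].recvfrom(4096)
        return reply, addr[0]  # raw DNS response, and which server won
    finally:
        for s in socks:
            s.close()

if __name__ == "__main__":
    # Illustrative resolvers only -- in practice these would be configurable.
    reply, winner = replicated_lookup("example.com", ["8.8.8.8", "208.67.222.222"])
    print("first answer (%d bytes) came from %s" % (len(reply), winner))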

Boris Zbarsky

Dec 5, 2014, 5:30:50 PM
On 12/5/14, 1:12 PM, Vulimiri, Ashish wrote:
> or require a separate thread that would manage all requests/responses -- both of which are their own form of overhead.

Note that DNS is already done on a dedicated thread in necko, because
the system APIs involved are blocking...

-Boris

Christian Biesinger

Dec 5, 2014, 5:50:03 PM
to dev-tech...@lists.mozilla.org
On Fri Dec 05 2014 at 2:49:57 PM Vulimiri, Ashish <vuli...@illinois.edu>
wrote:

> I’m a grad student at the U of Illinois, and I’ve been looking into a
> technique for improving DNS lookup latency, involving replicating DNS
> requests to multiple DNS servers in parallel.
>

I think this is something we need to be really careful about, because this
would effectively double (or triple, etc) the load on the DNS servers we
use; I am not sure that the owners of those servers would be happy.

-christian

Patrick McManus

Dec 5, 2014, 8:40:42 PM
to Vulimiri, Ashish, dev-tech...@lists.mozilla.org, Godfrey, Brighten
Hi Ashish,

Thanks for bringing your paper up (actually the first link is timing out
for me right now - so I have only read the second). My personal viewpoint
is that algorithms that trade bandwidth for latency have a lot of value in
short-lifetime web scenarios - so I think it's interesting work. I've got
some concerns, but please view them in that bigger-picture light.

Quick question - you seem to be using a speculative tone in this email
thread about a drop-in replacement for getaddrinfo(), but the paper
indicates this experiment was actually executed with a local Firefox
build... is that how it was done? That seems like a reasonable approach,
but I want to understand if we're speculating or talking about results.

If that's the case, I am (pleasantly?) surprised you saw such an impact in
page load metrics. I'm not especially surprised that you can do better on
any particular query, but a lot of the time our page load time isn't
actually serialized on the DNS lookup latency, because of the speculative
queries we do. Maybe it's just a manifestation of a huge number of
sub-origins, or maybe your test methodology effectively bypassed that logic
by not finding URLs organically. (That would mean telemetry of average
browsing behavior would show less of an impact than the lab study.) We've
got some additional code coming soon that will link subdomains of origins
to your history, so that when you revisit an origin the subdomain DNS
queries will be done in parallel with the origin lookup - I would expect
that to mitigate some of the gains you see in real life as well.

There are two obvious scenarios you see improvement from - one is just
identifying a faster path, but the other is having a parallel query
in flight when one encounters a drop and has to retry. Just a few of those
retries could seriously change your mean. Do you have data to tease these
things apart? Retries could conceivably also be addressed with aggressive
timers.

It's also concerning that it seems the sum of the data is all based on the
comparison of one particular DSL connection and one particular (unnamed?)
ISP recursive resolver as the baseline. Do I have that right? How do we
tell if that's representative or anecdotal? It would be really interesting
to graph savings % against RTT to the origin.

One of my concerns is that, while I wish it weren't true, there really is
more than one DNS root on the Internet and the host resolver doesn't
necessarily have insight into that - corporate split-horizon DNS is a
definite thing. So silently adding more resolvers to that list will result
in inconsistent views.

Also, :biesi's concerns are fair to consider. This is a place where Mozilla
operating a distributed public service on behalf of its clients might be a
reasonable thing to consider, if it showed reproducible widespread gains (a
mighty big if). Any use of third-party servers (which would include
Mozilla-operated services) also comes with tracking and security concerns
which might not be surmountable. All interesting stuff to consider -
certainly before any code was integrated.

Thanks
-Patrick




Christopher Barry

Dec 6, 2014, 1:59:43 PM
to dev-tech...@lists.mozilla.org
On Fri, 05 Dec 2014 22:49:55 +0000
Christian Biesinger <cbies...@gmail.com> wrote:

>On Fri Dec 05 2014 at 2:49:57 PM Vulimiri, Ashish
><vuli...@illinois.edu> wrote:
>
>> I’m a grad student at the U of Illinois, and I’ve been looking into a
>> technique for improving DNS lookup latency, involving replicating DNS
>> requests to multiple DNS servers in parallel.
>>
>
>I think this is something we need to be really careful about, because
>this would effectively double (or triple, etc) the load on the DNS
>servers we use; I am not sure that the owners of those servers would
>be happy.
>
>-christian

Are you intimating that Firefox has specific built-in DNS servers it
uses, independent of the host's configured resolver, or that you as a
user, say in a corporate environment, use multiple DNS servers?

-C

Vulimiri, Ashish

Dec 6, 2014, 4:48:40 PM
to dev-tech...@lists.mozilla.org, Godfrey, Brighten
Hi Patrick,

Thanks for looking into this.

> (actually the first link is timing out for me right now - so I have only read the second)

Sorry, looks like the university website went down. This should be a more reliable link: http://conferences.sigcomm.org/co-next/2013/program/p283.pdf

> Quick question - you seem to be using a speculative tone in this email thread about a drop-in replacement for getaddrinfo(), but the paper indicates this experiment was actually executed with a local Firefox build... is that how it was done? That seems like a reasonable approach, but I want to understand if we're speculating or talking about results.

For the experiments we used a proxy DNS server: a separate process that would listen for requests on localhost:proxy_port, replicate them, and send answers back. When testing replication, we’d change the OS DNS server settings to point to the proxy, so that all DNS requests would go through the proxy. When testing the unreplicated baseline, we’d revert to the ISP default DNS settings.

We haven’t yet modified Firefox (or any other browser) to directly incorporate redundant DNS requests.
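
In outline, the proxy's forwarding logic was along these lines -- a simplified, single-threaded sketch, with a placeholder listen port and placeholder upstream resolvers rather than our exact experimental setup:

import select
import socket

LISTEN = ("127.0.0.1", 5300)  # point the OS resolver configuration here
UPSTREAMS = [("8.8.8.8", 53), ("208.67.222.222", 53)]  # placeholder choices

def serve_forever():
    listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    listener.bind(LISTEN)
    while True:  # one query at a time, for clarity; a real proxy multiplexes
        query, client = listener.recvfrom(4096)
        # Replicate the raw query, unmodified, to every upstream resolver;
        # the transaction ID is preserved, so any reply matches the client.
        forwards = []
        for upstream in UPSTREAMS:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.sendto(query, upstream)
            forwards.append(s)
        try:
            # Relay whichever answer arrives first; ignore the rest.
            readable, _, _ = select.select(forwards, [], [], 5.0)
            if readable:
                listener.sendto(readable[0].recv(4096), client)
            # On timeout we send nothing; the stub resolver will retry.
        finally:
            for s in forwards:
                s.close()

if __name__ == "__main__":
    serve_forever()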

> If that's the case, I am (pleasantly?) surprised you saw such an impact in page load metrics. I'm not especially surprised that you can do better on any particular query, but a lot of the time our page load time isn't actually serialized on the DNS lookup latency, because of the speculative queries we do. Maybe it's just a manifestation of a huge number of sub-origins, or maybe your test methodology effectively bypassed that logic by not finding URLs organically. (That would mean telemetry of average browsing behavior would show less of an impact than the lab study.) We've got some additional code coming soon that will link subdomains of origins to your history, so that when you revisit an origin the subdomain DNS queries will be done in parallel with the origin lookup - I would expect that to mitigate some of the gains you see in real life as well.

Yes, you’re right, our testing methodology was not very realistic: we simply kept repeatedly picking and loading a random website from the Alexa top-1000 list. It is possible that prefetching would perform better over a more realistic browsing session.

> There are two obvious scenarios you see improvement from - one is just identifying a faster path, but the other is having a parallel query in flight when one encounters a drop and has to retry. Just a few of those retries could seriously change your mean. Do you have data to tease these things apart? Retries could conceivably also be addressed with aggressive timers.

I can’t speak to total page load times, but one thing I should be able to do is look at our raw DNS latency data to see how the improvement we’re seeing (in DNS lookup latency) would change if we were to ignore all failed requests (no answer before timeout). This should cut out effect #2. I’m (briefly) traveling soon and won’t have access to our archived data but I will figure this out over the next couple of days.

> It's also concerning that it seems the sum of the data is all based on the comparison of one particular DSL connection and one particular (unnamed?) ISP recursive resolver as the baseline. Do I have that right? How do we tell if that's representative or anecdotal? It would be really interesting to graph savings % against RTT to the origin.

Yes, our page load time numbers are only from two sites -- Firefox on an AT&T (Illinois) DSL link vs the ISP’s DNS server, and Chrome on an academic network (U of Utah’s) -- and you’re right that a larger scale evaluation would be necessary to argue these numbers are representative. But I will note that we did look at raw DNS lookup latency a little more extensively, at 15 sites across North America. These numbers are in our other paper, the one I linked to above.

> One of my concerns is that, while I wish it weren't true, there really is more than one DNS root on the Internet and the host resolver doesn't necessarily have insight into that - corporate split-horizon DNS is a definite thing. So silently adding more resolvers to that list will result in inconsistent views.

Agreed, that is an issue. One more limited implementation that would still be feasible would be to check whether the OS has multiple DNS servers configured, and if so, replicate queries only to those servers; a sketch of that check follows. Of course, this is quite different from the scenario we tested and would require careful evaluation to see if there's any benefit.
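
As a very rough illustration of that check on Unix-like systems (the resolv.conf path and the final hand-off are assumptions for the sketch, not anything we have implemented):

def configured_nameservers(path="/etc/resolv.conf"):
    """Collect the nameservers the OS is already configured to trust."""
    servers = []
    try:
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
    except OSError:
        pass
    return servers

servers = configured_nameservers()
if len(servers) > 1:
    # Only replicate when the user already trusts several resolvers.
    print("would replicate across:", servers)
else:
    print("single resolver configured; no replication")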

> Also, :biesi's concerns are fair to consider. This is a place where Mozilla operating a distributed public service on behalf of its clients might be a reasonable thing to consider, if it showed reproducible widespread gains (a mighty big if). Any use of third-party servers (which would include Mozilla-operated services) also comes with tracking and security concerns which might not be surmountable. All interesting stuff to consider - certainly before any code was integrated.

and

> On Dec 6, 2014, at 12:56 PM, Christopher Barry <christoph...@gmail.com> wrote:
>
> On Fri, 05 Dec 2014 22:49:55 +0000
> Christian Biesinger <cbies...@gmail.com> wrote:
>
>> I think this is something we need to be really careful about, because
>> this would effectively double (or triple, etc) the load on the DNS
>> servers we use; I am not sure that the owners of those servers would
>> be happy.
>
> Are you intimating that Firefox has specific builtin DNS servers it
> uses, independent of the host's configured resolver, or that you as a
> user, say in a Corporate environment, use multiple DNS servers?

To lay out the options here, in no particular order:

1. Mozilla operates a public-good service, a network of DNS servers that Firefox will use to reduce latency.

2. Convince a third party to lend their DNS infrastructure out. Many of the DNS servers we used for our experiments publicly advertise their DNS service -- Google public DNS, OpenDNS, Level-3 (well, I suppose L3 doesn’t quite advertise) -- and it’s conceivable some of them could be convinced to support a service like this. Although I’m aware this would be a can of worms.

3. Somehow learn a list of DNS servers the user would be willing to trust. Say by checking to see if the OS already has multiple DNS servers configured. If I’m not mistaken, Comcast configures connections with two different DNS servers; and I’ve been on corporate networks with multiple servers configured.

Advantage from #1 and #2: adding load to the DNS servers would not be a concern
Advantage from #3: no trust issues

Ashish

Brighten Godfrey

Dec 6, 2014, 6:15:41 PM
to dev-tech...@lists.mozilla.org, Ashish Vulimiri
Hi All, just wanted to add a few things to what Ashish said. Christian Biesinger wrote:

> I think this is something we need to be really careful about, because this
> would effectively double (or triple, etc) the load on the DNS servers we
> use; I am not sure that the owners of those servers would be happy.

As Patrick noted, we are explicitly trading bandwidth for latency. This is no different than, say, DNS prefetching -- you're using extra bandwidth that might have been unnecessary but that on average saves time. At a high level, bandwidth is cheap and latency is expensive so this ends up being a good tradeoff. (In fact if you work out the numbers, it seems to be a good tradeoff even if you're paying for bandwidth on a cell connection; that's what we tried to work out in the short paper Ashish linked to.) Many of the owners of the DNS servers would be happy because it makes their sites faster.

As an aside, one advantage of having browsers do redundant DNS queries (as opposed to the OS doing it) is that the browser can decide when redundancy is worthwhile (i.e. there's a human waiting) and when it might be OK just to send a single query (e.g. DNS prefetching, when you might have a few seconds to spare).

> There are two obvious scenarios you see improvement from - one is just
> identifying a faster path, but the other is having a parallel query
> in flight when one encounters a drop and has to retry. Just a few of those
> retries could seriously change your mean. Do you have data to tease these
> things apart? Retries could conceivably also be addressed with aggressive
> timers.

Yes, redundant queries are certainly useful to protect against both drops and slow lookups.

You could use an aggressive timer. But if the timer is set to less than the typical DNS resolution time, you might as well have just sent multiple queries to begin with. So the best you could do is set the timer equal to the resolution time, which means you're always going to be paying this extra delay if your first query ends up being slow/failed. And you're still spending some extra bandwidth because with an aggressive timer, the first query will sometimes end up succeeding just after you send the second. So aggressive timers will not get as good latency and presumably don't have as good a bandwidth/latency tradeoff; but, we have not explicitly experimented with that. Have you experimented with timer aggressiveness in Firefox? I'd be very curious to see the results.

All that said, in Ashish's experiments there's a cliff after 1 second so there are certainly timer effects. See Figure 15:
http://conferences.sigcomm.org/co-next/2013/program/p283.pdf

>> One of my concerns is that, while I wish it weren't true, there really is more than one DNS root on the Internet and the host resolver doesn't necessarily have insight into that - corporate split-horizon DNS is a definite thing. So silently adding more resolvers to that list will result in inconsistent views.
> Agreed, that is an issue. One more limited implementation that would still be feasible would be to check whether the OS has multiple DNS servers configured, and if so, replicate queries only to those servers. Of course, this is quite different from the scenario we tested and would require careful evaluation to see if there's any benefit.

This would also allow the user to configure more DNS servers if they choose, using the same mechanism they already use on their OS.

Both the issue of which DNS servers are acceptable to use, and the issue of measuring performance improvement in more realistic use cases, are important. Does Firefox have mechanisms for testing experimental technology like this in realistic environments?

Thanks,
~Brighten


Christopher Barry

Dec 6, 2014, 7:23:06 PM
to dev-tech...@lists.mozilla.org
> [...]
>To lay out the options here, in no particular order:
>
>1. Mozilla operates a public-good service, a network of DNS servers
>that Firefox will use to reduce latency.
>
>2. Convince a third party to lend their DNS infrastructure out. Many
>of the DNS servers we used for our experiments publicly advertise
>their DNS service -- Google public DNS, OpenDNS, Level-3 (well, I
>suppose L3 doesn’t quite advertise) -- and it’s conceivable some of
>them could be convinced to support a service like this. Although I’m
>aware this would be a can of worms.
>
>3. Somehow learn a list of DNS servers the user would be willing to
>trust. Say by checking to see if the OS already has multiple DNS
>servers configured. If I’m not mistaken, Comcast configures
>connections with two different DNS servers; and I’ve been on corporate
>networks with multiple servers configured.
>
>Advantage from #1 and #2: adding load to the DNS servers would not be
>a concern
>Advantage from #3: no trust issues
>
>Ashish

My strong opinion, and indeed it is the understood expectation of anyone
using any application that requires name resolution, is that all
applications always strictly obey the local resolver configuration of
the host running the application. Period. At no time should any
application bypass the local resolver configuration and use name
servers not explicitly specified by the user - for any reason,
regardless of possible performance benefit. If this is what FF is doing
now, I am extremely disappointed in that decision. That behavior
transcends bad design and approaches malware level.

I can understand it if you included a list of DNS servers *you* trust
for convenience, and distributed that as a text file with the app, but
modifying the system's DNS server list (e.g. adding the servers you
might recommend to the system without specific instructions to do so,
or using them directly from the application) is strictly a root- or
administrator-level decision - never an application's.

This behavior should reside at the resolver and/or DHCP server
level, not in any application. Put your idea into a new kind of
resolver daemon that can select from DHCP-provided or statically
configured name servers, and let people run that if they so choose.
This would benefit all name resolution on the system, not just from
within a specific application.

-C

Patrick McManus

Dec 7, 2014, 9:39:39 AM
to Christopher Barry, dev-tech-network
On Sat, Dec 6, 2014 at 7:21 PM, Christopher Barry <
christoph...@gmail.com> wrote:

>
> My strong opinion, and indeed it is the understood expectation of anyone
> using any application that requires name resolution, is that all
> applications always strictly obey the local resolver configuration of
> the host running the application. Period.


I'm going to push back against this notion that operating system services
must always take priority.

For instance, Windows provides a trust root list that Firefox ignores in
favor of its own. That's a design choice.

There are several reasons we might do things like that - performance,
security, and the ability to effect legacy configurations, for example.
There are also costs in terms of administrative awkwardness, surprises, and
incompatibilities. It's not to be undertaken lightly.

It would be wrong to interpret this mail as supporting the algorithm being
discussed in this thread (I'm basically open-minded on the topic); I'm just
saying it's plausible to discuss.

The much larger problem, to me, is that use of a public DNS adds another
party to your transaction: {client, origin, isp, public-dns} .. it's
conceivable such an algorithm would boost performance using only multiple
ISP servers, but there is no evidence to show that at this point, and
honestly thin evidence overall. So it's the kind of thing that bears more
investigation.

> regardless of possible performance benefit. If this is what FF is doing
> now,
>

Just to be clear - this thread is discussing the results of a small
academic experiment, not general Firefox behavior. I appreciate the
authors bringing it here to discuss - let's keep it a welcoming environment
for exploration.

Thanks
-Patrick (wearing module owner hat).

Patrick McManus

Dec 7, 2014, 9:49:39 AM
to Brighten Godfrey, dev-tech...@lists.mozilla.org, Ashish Vulimiri
On Sat, Dec 6, 2014 at 5:58 PM, Brighten Godfrey <p...@illinois.edu> wrote:

>
> You could use an aggressive timer. But if the timer is set to less than
> the typical DNS resolution time, you might as well have just sent multiple
> queries to begin with.


A couple of thoughts there:
1 - timeout-driven multiples might have similar bandwidth costs, but if
they used a consistent resolver then the privacy problem is avoided.
2 - if the issue is that the OS retry is too slow for drop handling
(typically measured in 1000s of ms), bringing that timer down to the 90th
percentile of successful lookups (in the 100s of ms) could have significant
impact at marginal cost.

Again, this is only plausible if the source of the gains you saw was due to
having redundancy in the face of loss. So that's worth figuring out.
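
To illustrate the timer-driven alternative, a rough Python sketch: the 300 ms stagger and 5-second overall timeout are made-up numbers for illustration, and 'query' is assumed to be a prebuilt raw DNS packet (e.g. from a helper like the build_query sketch earlier in this thread).

import select
import socket

def staggered_lookup(query, primary, backup, stagger=0.3, overall=5.0):
    """Send to one resolver; hedge with a second only if the first is slow."""
    socks = []
    try:
        first = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        first.sendto(query, (primary, 53))
        socks.append(first)
        # Phase 1: give the primary resolver a short head start.
        readable, _, _ = select.select(socks, [], [], stagger)
        if not readable:
            # Phase 2: the primary is slow or the packet dropped; send a
            # backup query and accept whichever answer lands first.
            second = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            second.sendto(query, (backup, 53))
            socks.append(second)
            readable, _, _ = select.select(socks, [], [], overall)
        if not readable:
            raise TimeoutError("no resolver answered")
        return readable[0].recv(4096)
    finally:
        for s in socks:
            s.close()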


> Both the issue of which DNS servers are acceptable to use, and the issue of
> measuring performance improvement in more realistic use cases, are
> important. Does Firefox have mechanisms for testing experimental
> technology like this in realistic environments?
>
>
Add-ons are a good approach here. We can promote them in places like Mozilla
Hacks, and the Mozilla telemetry system can be used for reporting certain
kinds of anonymous results. It's well suited for reporting timings under
different conditions.

Thanks for your work.

Christopher Barry

Dec 7, 2014, 5:01:00 PM
to dev-tech-network
On Sun, 7 Dec 2014 09:38:36 -0500
Patrick McManus <mcm...@ducksong.com> wrote:

>On Sat, Dec 6, 2014 at 7:21 PM, Christopher Barry <
>christoph...@gmail.com> wrote:
>
>>
>> My strong opinion, and indeed it is the understood expectation of
>> anyone using any application that requires name resolution, is that
>> all applications always strictly obey the local resolver
>> configuration of the host running the application. Period.
>

Hi Patrick,

I understand the spirit of encouraging investigation and development;
that's a great thing, and I couldn't agree more.

I'm personally skeptical that this redundant name-resolution scheme
wouldn't just adversely impact name servers and make a lot of noise for
relatively imperceptible gain, if any, especially if deployed at scale.
In my experience, it's not name resolution that's the bottleneck in my
web usage. It's typically waiting for ad servers to deliver their
drivel.

And while I'm also concerned about the potential for user privacy
invasion and tracking capabilities that appear, to me at least, to
possibly be a subliminal motive for this research, that's just my
personal gut reaction and my *opinion*, and not at all at the heart of
my objection to this discussion here.

Please see comments inline...

>
>I'm going to push back against this notion that operating system
>services must always take priority.

Well, it's not really a notion; it's established secure practice
that's in place for a reason. You're going to say that an application
should take *priority*, essentially usurping control over system services
for its own particular reasons? Possibly behind the user's back, where
private data will be shared with unknown third parties? Huh? Doesn't
that basically describe how malware operates? I'm reasonably sure you
don't really feel that way.

>
>For instance, windows provides a trust root list that firefox ignores
>in favor of its own. That's a design choice.

But the user in this case is free to control this subsystem, no?

>
>There are several reasons we might do things like that - performance,
>security, and the ability to effect legacy configurations for example.
>There are also costs in terms of administrative awkwardness,
>surprises, and incompatibilities. Its not to be undertaken lightly.

Increasing performance and security are correct goals within the scope
of the application, absolutely, and reasonable defaults are always a
good idea too, but I will posit that it's not the domain nor purview of
applications to worry about, nor to override, system functions. If one
has an improvement to the behavior of a system function, they should
put their energy into helping improve that system function; it
should not be part of an application whose use essentially
performs an end-run around the existing system function.
Simply put, that just ain't cool.

>
>It would be wrong to interpret this mail as supporting the algorithm
>being discussed in this thread (I'm basically open minded on the
>topic), I'm just saying its plausible to discuss.

Well, that's encouraging. I consider myself open-minded as well, but
in this case the discussion should be done in the appropriate forum(s).
While the work is interesting, and likely requires much more
investigation, the suggested behavior is not germane to this forum.

>
>The much larger problem, to me, is that use of a public DNS adds
>another party to your transaction: {client, origin, isp,
>public-dns} .. it's conceivable such an algorithm would boost

Bingo. And this is why this is a very bad idea unless the
administrator-user has complete control over the name servers being used,
if and when this idea is deployed as a 'system-level daemon'.

>performance using only multiple ISP servers, but there is no evidence
>to show that at this point, and honestly thin evidence overall. So it's
>the kind of thing that bears more investigation.

Agreed. Granted, maybe I'm missing something important
here, but intuitively it's not clear how this methodology at scale
would not eventually settle into similar (or worse) latencies over time,
at the cost of a lot of resource-consuming noise and churn that
would, in my view, ultimately impact the latencies of literally all
other traffic. It may merit more investigation, but probably
not here.

>
>> regardless of possible performance benefit. If this is what FF is doing
>> now,
>>
>
>Just to be clear - this thread is discussing the results of a small
>academic experiment, not general Firefox behavior. I appreciate the
>authors bringing it here to discuss - let's keep it a welcoming
>environment for exploration.

Yes, and discussion of it with the idea of possibly helping the authors
find a more appropriate forum in which to investigate its possible
merits should definitely be undertaken. I'm certainly not saying that
they should not investigate the potential usefulness of their work, nor
that, if it does indeed prove to be an effective idea, Firefox (and
any other application) would not benefit from it.

I'm simply saying that even considering embedding this technology into
any application, not just Firefox, is absolutely not the correct
approach, and I would hope members of this forum, knowing that as
well, would help to steer the authors in the right direction.

For instance, the bind-* or dhcp-* lists may be a good place for these
folks to initially discuss their project ideas, and folks there may have
additional ideas about other appropriate lists to approach, and about where
and how this technology can best be investigated and evaluated for use in
other client operating systems.

The bind/dhcp list server:
https://lists.isc.org/mailman/listinfo


--
Regards,
Christopher Barry

Random geeky fortune:
Round Numbers are always false.
-- Samuel Johnson

Vulimiri, Ashish

Dec 12, 2014, 3:55:03 PM
to dev-tech...@lists.mozilla.org, Godfrey, Brighten
Apologies for the slow response; I was traveling.

Suppose (1) we were going to replicate exclusively to whichever DNS servers the OS has configured, so no trust issues would be raised; and (2) we had enough large-scale/realistic experimental data to prove the technique does significantly reduce page load time when used this way. In your opinion, what would the most appropriate place be for something like this to be deployed?

1. In the browser.

2. As a separate piece of software users would need to install.

3. As part of bind or another DNS resolver.

4. In the OS.

Christopher Barry

Dec 12, 2014, 5:35:06 PM
to dev-tech...@lists.mozilla.org
On Fri, 12 Dec 2014 20:53:13 +0000
"Vulimiri, Ashish" <vuli...@illinois.edu> wrote:

>Apologies for the slow response; I was traveling.
>
>Suppose (1) we were going to replicate exclusively to whichever DNS
>servers the OS has configured, so no trust issues would be raised; and
>(2) we had enough large-scale/realistic experimental data to prove the
>technique does significantly reduce page load time when used this
>way. In your opinion, what would the most appropriate place be for
>something like this to be deployed?
>
>1. In the browser.
>
>2. As a separate piece of software users would need to install.
>
>3. As part of bind or another DNS resolver.
>
>4. In the OS.
>

To answer your questions:
1) Never.
2-4) Possibly, with their buy-in and approval, and global administrative
control.


I think you mean 'query', not 'replicate' above... correct?

I suspect your blast technique will show some promise at low levels of
deployment. This seems somewhat obvious. Where the algorithm falls down
is at scales you would not reasonably be able to test at. My suspicion
is that if you a) presented this idea to the glibc or BIND folks, or b)
created an RFC outlining your proposal, you would better see where this
idea stands.


Speaking strictly about *NIX here.

From 'man resolv.conf':

"
nameserver Name-server-IP-address

Internet address of a name server that the resolver should query,
either an IPv4 address (in dot notation), or an IPv6 address in colon
(and possibly dot) notation as per RFC 2373. Up to MAXNS
(currently 3, see <resolv.h>) name servers may be listed, one per
keyword. If there are multiple servers, the resolver library queries
them in the order listed. If no nameserver entries are
present, the default is to use the name server on the local machine.
(The algorithm used is to try a name server, and if the query times
out, try the next, until out of name servers, then repeat trying all
the name servers until a maximum number of retries are made.)
"

Note that this is the expected behavior of how the glibc resolver
handles the configured nameservers. It is not uncommon to configure the
first nameserver as the primary and the second as a secondary (often both
in-house enterprise servers, or redundant ISP servers), with the third
being a public DNS server only to be used when absolutely necessary, e.g. if
both preferred nameservers are down for some reason. This functionality
is desirable for many reasons already stated.

Note also that the resolver allows a round-robin query methodology
using the 'rotate' option, which essentially spreads the load around
all configured name servers if that functionality is desired.
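
For illustration, a resolv.conf along these lines (the addresses are made up) exercises exactly the behavior described above:

# Tried in listed order by default; 'rotate' spreads queries round-robin
# instead, and 'timeout' shrinks the per-server retry window (in seconds).
nameserver 10.0.0.2      # primary in-house server
nameserver 10.0.0.3      # secondary in-house server
nameserver 8.8.8.8       # public fallback
options rotate timeout:1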

Note also that the resolver only queries the *local* nameserver if
resolv.conf is not present. Again, this is often desired, and would be
something the admin would decide.

-C
Q: What's the difference between a duck and an elephant?
A: You can't get down off an elephant.

Patrick McManus

Dec 13, 2014, 10:31:14 PM
to Vulimiri, Ashish, dev-tech...@lists.mozilla.org, Godfrey, Brighten
I think you're basically asking: if you had an enhancement that yielded
a 10% improvement in overall page load times, scaled well, and didn't have a
privacy or security concern, would we consider deploying it in the browser?

If that was the most practical way to reach our users, absolutely.

But you've set a pretty high bar, and I'm skeptical that this can yield
those kinds of broad-based gains. Prove me wrong and we'll all be better
for it. :)

-P


