
ntp DNS round robin experiment


Adrian 'Dagurashibanipal' von Bidder

Jan 27, 2003, 2:55:30 PM
Hello!

Yes, this is of course inspired by the 'Public servers abuse' and the
'Fixing...' threads. I thought that instead of talking, I'd just try
whether this is going to work: a DNS round robin for time servers.

The idea: have two DNS names and invite everyone to use a config file with
just:

---
driftfile ...
server left.time.fortytwo.ch
server right.time.fortytwo.ch
---

This takes care of the client side - if this is the default configuration,
people don't have to hunt for public NTP servers, and so probably won't
even go looking for the famous 'list of public time servers'.

On the server side: Accuracy is not the problem - so, I'd not hesitate to
add stratum-4 or even higher timeservers to the list. Having the offset
stay below 50ms should suffice for everyday use.

So how do I get the timeservers? I think announcing this project on the
debia...@lists.debian.org list, and mentioning that they only need to
publicise services they already have running, without any reconfiguration,
will reveal quite a few servers. Does anybody know of similar mailing lists
that could raise a few servers?

Stage 2 of the plan (once there's a big enough number of time servers)
could then be to encourage vendors to include
(left|right).time.fortytwo.ch in their default configuration, with a plug
that they should join the project if they have a static IP address.

Stage 3 (and now I'm dreaming) would then be to try to optimize the ntp
relationships according to traceroute distance etc, to establish something
like a global internet time network (meaning basically that every ntp
server potentially syncs with every other - compare the notion of the
'strong set' in the GPG keyring...)

So far:
- the time.fortytwo.ch zone will soon be officially created
- there's a webpage at http://fortytwo.ch/time
- and mailinglists timekeepers[-announce]@fortytwo.ch

Needed:
- ntp servers willing to join the experiment
- a few secondary nameservers for the zone to spread that part of the
load.

Problems:
- weeding out abandoned servers. I rely on users complaining, but I'll
run the occasional 'while read i; do ntpq -p $i; done', too.
- It's inelegant. Well, yes. But it should work and we can do it right
now.
- it doesn't solve the problem that people are not aware that they can
have something better than adjusting their pc's clock every few months.
- centralized: somebody/some group has to do DNS maintenance

I'm willing to do DNS maintenance for now - I don't expect thousands of
servers anytime soon, so it should stay pretty manageable.

Comments?
-- vbi

Adrian 'Dagurashibanipal' von Bidder

Jan 28, 2003, 3:55:23 AM
Behold! For Adrian 'Dagurashibanipal' von Bidder declaimed:

> - there's a webpage at http://fortytwo.ch/time
> - and mailinglists timekeepers[-announce]@fortytwo.ch

For those trying: the mailing lists have only just been created. No mail
so far; that's why the archives link is not working yet.

I think I shall officially announce the project launch when there are
something like 10 timeservers.

So - even if you're just running stratum-5 syncing with your ISP's next-hop
router, no problem, have your timeserver added to the list. The project is
about getting many servers to spread the load; accuracy etc. comes later.

cheers
-- vbi

--
To add insult to injury.
-- Phaedrus

Tapio Sokura

Jan 28, 2003, 4:59:36 AM
Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:
> server left.time.fortytwo.ch
> server right.time.fortytwo.ch

How about also adding continent designators to the names, like left.eu.time
and right.as.time, so that those who know which continent they are on can
preselect a closer group of servers. Maybe even country-specific groups.
This of course requires some activity from the person setting up the
machine, so the default non-geographical server group would be necessary
anyway.

On a sidenote, would a default server group selection based on the time
zone selected at installation be a good idea? For example, if I chose
"Europe/Helsinki" as my time zone during OS installation, it's probable that
I'm also physically close to Finland, and thus default servers of
left/right.fin.time might be good... maybe something for the OS people to
consider if a common NTP DNS takes off.
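As a rough illustration of that installer-side guess, in Python (the
country zone names are hypothetical, and a real table would need an entry
per time zone):

# Map the time zone chosen at install time to a country-specific pool
# zone, falling back to the global round-robin name.
TZ_TO_ZONE = {
    "Europe/Helsinki": "fin.time.fortytwo.ch",
    "Europe/Zurich": "che.time.fortytwo.ch",
}

def default_ntp_zone(timezone):
    return TZ_TO_ZONE.get(timezone, "time.fortytwo.ch")

print(default_ntp_zone("Europe/Helsinki"))  # -> fin.time.fortytwo.ch
print(default_ntp_zone("Asia/Tokyo"))       # -> time.fortytwo.ch (fallback)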

Anyway, if continent/country-specific DNS names are put into use, they
should all exist from the day they are introduced, even if there are no
servers in a certain country or continent. This would ensure that
configurations using these names remain valid, even though the servers might
not be in that particular country or continent (but as close as possible
anyway).


All of this of course adds administrative overhead, which in my opinion is
important to consider so that continuity is ensured (= it should require as
little effort as possible for good results). Maybe it's best to start with
just the global (maybe also continent) DNS names.

> Problems:
> - weeding out abandoned servers. I rely on users complaining, but I'll
> run the occasional 'while read i; do ntpq -p $i; done', too.

I think having some kind of regular (semi)automatic monitoring of the
servers included in the DNS is necessary. I don't mean that there would have
to be close, history-aware monitoring of dispersion or anything like that;
simple availability and offset checks (say, within a couple of hundred ms),
run maybe once a day, would be a lot better than nothing regular at all.

Having unreachable or heavily falseticking servers in the DNS for a long
time diminishes the value of the list. Adding a "center" DNS name would
provide some redundancy in case of heavily falseticking servers.

Adrian 'Dagurashibanipal' von Bidder

Jan 28, 2003, 8:32:40 AM
[cc:ing to timekeepers mailing list]
Behold! For Tapio Sokura declaimed:

> Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:
>> server left.time.fortytwo.ch
>> server right.time.fortytwo.ch
>
> How about also adding continent designators to the names, like left.eu.time
> and right.as.time, so that those who know which continent they are on can
> preselect a closer group of servers. Maybe even country-specific groups.
> This of course requires some activity from the person setting up the
> machine, so the default non-geographical server group would be necessary
> anyway.

I thought about it. I think it's a good idea, but only if I can get 10 or
more servers for each domain name - which is not the case right now, with
the project just started (but still, 6 servers have already joined -
thanks to those people!).



> Anyway, if continent/country-specific DNS names are put into use, they
> should all exist from the day they are introduced, even if there are no
> servers in a certain country or continent. This would ensure that
> configurations using these names remain valid, even though the servers might
> not be in that particular country or continent (but as close as possible
> anyway).

Hmm. I think using the generic names now and introducing the
continent/country-code names later is good enough. But I agree that *if*
geographic names are introduced, the whole world should be covered from
day 1, so that automatic setup tools are easy to write.

> All of this of course adds administrative overhead, which in my opinion is
> important to consider so that continuity is ensured (= requires as little
> effort as possible for good results). Maybe it's best to start with just the
> global (maybe also continent) dns names.

As I said: I'll just have the global names right now. If this project
really takes off, I'll be happy to invest some time to write management
software, and I'll be happy to have a co-sysadmin, too. But that's not now.

>> Problems:
>> - weeding out abandoned servers. I rely on users complaining, but I'll
>> run the occasional 'while read i; do ntpq -p $i; done', too.
>
> I think having some kind of regular (semi)automatic monitoring of the
> servers included in the DNS is necessary. I don't mean that there would have
> to be close, history-aware monitoring of dispersion or anything like that;
> simple availability and offset checks (say, within a couple of hundred ms),
> run maybe once a day, would be a lot better than nothing regular at all.

Wanted: a script. Yes, I can write one; it's just a question of when...

I'd say weekly, not daily - ntp is a slow world, somehow. No problem
running this on my server.
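A minimal sketch of such a weekly check, in Python with the third-party
ntplib package (the server list file and the 200 ms threshold are
assumptions for illustration, not project policy):

import ntplib

# Query each listed server once; flag the unreachable ones and those
# whose offset looks suspiciously large.
client = ntplib.NTPClient()
with open("pool-servers.txt") as listing:      # assumed: one host per line
    hosts = [line.strip() for line in listing if line.strip()]

for host in hosts:
    try:
        reply = client.request(host, version=3, timeout=5)
        verdict = "ok" if abs(reply.offset) < 0.2 else "offset %.3fs - check" % reply.offset
    except Exception as exc:
        verdict = "unreachable (%s)" % exc
    print("%-30s %s" % (host, verdict))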

> Having unreachable or heavily falseticking servers in the DNS for a long
> time diminishes the value of the list. Adding a "center" DNS name would
> provide some redundancy in case of heavily falseticking servers.

Hmmm. Thinking about this, I have now changed the nameserver setup to

center IN CNAME time.fortytwo.ch.
left IN CNAME time.fortytwo.ch.
right IN CNAME time.fortytwo.ch.

with all servers being time.fortytwo.ch. The downside is, of course, that
there's some likelihood of picking the same server twice, but the upside
is that the arbitrary division into 'left servers' and 'right servers'
disappears. And: ntpd works just fine with

...
server time.ethz.ch
server time.ethz.ch
...

in its config file. Any sane resolver library will never assign the same
server to subsequent calls of gethostbyname. So the left/center/right
names are now only for those clients that do not allow the same name to be
entered twice.
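A quick way to see this (assuming the zone is live; the rotation itself is
done by the name server):

import socket

# Resolve the round-robin name twice: the address *set* is the same,
# but a rotating name server changes the order between answers, so two
# identical 'server' lines usually end up on two different machines.
for attempt in (1, 2):
    _, _, addresses = socket.gethostbyname_ex("time.fortytwo.ch")
    print("attempt %d: first=%s, all=%s" % (attempt, addresses[0], addresses))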

(speaking of name servers: I'd be happy if there could be some backup
nameservers - is anybody running a nameserver anyway?)

So long
-- vbi

--
preciousssssssssssssssssssssssssssssssssssssssssssssssssssss

Eric

Jan 28, 2003, 10:24:37 AM
This was discussed before in the NG, about six or nine months ago
IIRC.

For a world-wide, standard NTP server-pool based on DNS, I think that
getting the naming right, from the start, is critical.

Your experiment is a good one to try, but if it works, and gets
support, might I suggest that the domains might better exist as a
subdomain of ntp.org? If not under ntp.org, than a new domain like
wwntp.org, .net, or something.

In the absence of balancing by network distance, I think that
continent specific zones are also a good idea. So, perhaps us.ntp.org
for a pool of S3 or better NTP servers for use in the US, eu.ntp.org
in the EU, etc.

Or us.pool.ntp.org, to create a subdomain (pool.) that can be hosted
more widely than what the main ntp.org domain needs to be.

I chose S3 because I see the S2's getting abused, which was the start
of this thread. I think that eventually, the "public" S1s and S2s
will not be so public, and be reserved for use by their owners and
certain outside S2s and S3s. Then the "public S2 list" turns into the
"S2 list for those providing public S3s", and the simpler S3 names can
be the ones widely publicized, or imbedded in firmware.

This does seem to work best with clients that are able to use DNS for
host resolution, and re-resolve it if/when their NTP server goes away.
This is not typical behavior from the NTP port in Cisco routers (they
resolve the symbolic names at config time, and only save the IPAs), or
even V3 or V4 NTPD for that matter (it only resolves hostnames through
DNS at startup).

- Eric

Adrian 'Dagurashibanipal' von Bidder

Jan 28, 2003, 11:34:24 AM
Behold! For Eric declaimed:

> This was discussed before in the NG, about six or nine months ago
> IIRC.
>
> For a world-wide, standard NTP server-pool based on DNS, I think that
> getting the naming right, from the start, is critical.
>
> Your experiment is a good one to try, but if it works, and gets
> support, might I suggest that the domains might better exist as a
> subdomain of ntp.org? If not under ntp.org, than a new domain like
> wwntp.org, .net, or something.

Hmmm. Why not - pool.ntp.org sounds fine to me.

> In the absence of balancing by network distance, I think that
> continent specific zones are also a good idea. So, perhaps us.ntp.org
> for a pool of S3 or better NTP servers for use in the US, eu.ntp.org
> in the EU, etc.

Tapio Sokura commented on this, too, and I commented on his posting - in
essence my opinion is to provide these as soon as there are enough (50?)
timeservers in the pool.

What zones would you propose? I'd think country zones would not get big
enough for a long time, so continental zones would be ok:
europe
north-america
south-america
africa
asia
australia

Hmm. Three- or four-letter abbreviations would probably look nicer?

>
> Or us.pool.ntp.org, to create a subdomain (pool.) that can be hosted
> more widely than what the main ntp.org domain needs to be.
>
> I chose S3 because I see the S2's getting abused, which was the start
> of this thread. I think that eventually, the "public" S1s and S2s

Because of the random nature of round-robin DNS, I'm not at all concerned
about what stratum the timeservers have - some might be S2, but others
might be S5 - it's all the same. If people notice that certain servers are
bad (be they S1 or S5), out they go.

> will not be so public, and be reserved for use by their owners and
> certain outside S2s and S3s. Then the "public S2 list" turns into the
> "S2 list for those providing public S3s", and the simpler S3 names can
> be the ones widely publicized, or imbedded in firmware.
>
> This does seem to work best with clients that are able to use DNS for
> host resolution, and re-resolve it if/when their NTP server goes away.
> This is not typical behavior from the NTP port in Cisco routers (they
> resolve the symbolic names at config time, and only save the IPAs), or
> even V3 or V4 NTPD for that matter (it only resolves hostnames through
> DNS at startup).

Implementing explicit round robin support of this type should perhaps be
looked at if this experiment proves that it's going to work. But in the
meantime, I think, it should work well enough just as it is.

Ok, folks: I'd really like to continue with this for some time. If the
folks at ntp.org think that it would better be pool.ntp.org, I'm happy to
rename the project - it's just a one-line change in the DNS configuration,
after all. I'll yell for help when the load (workload or server load -
probably the first) becomes too big.

A possible plan would then be
- change DNS to pool.ntp.org and notify all who have offered a server
- continue to collect servers
- make a link on the 'public time servers' page
- when enough servers are available, start providing per-continent zones.
I think there should be at least 10-20 servers for each continent. The
default (global) zone would stay for out-of-the-box configurations for
people who don't care.
- when enough servers are available, start shipping this as default
configuration of ntp clients, with encouraging comments that those who
have static IPs should have their server added to the pool.

cheers
-- vbi

--
get my gpg key here: http://fortytwo.ch/gpg/92082481

Eric

Jan 28, 2003, 1:16:34 PM
On Tue, 28 Jan 2003 17:34:24 +0100, "Adrian 'Dagurashibanipal' von
Bidder" <middle...@fortytwo.ch> wrote for the entire planet to see:

>Behold! For Eric declaimed:
>
>> This was discussed before in the NG, about six or nine months ago
>> IIRC.
>>
>> For a world-wide, standard NTP server-pool based on DNS, I think that
>> getting the naming right, from the start, is critical.
>>
>> Your experiment is a good one to try, but if it works, and gets
>> support, might I suggest that the domains might better exist as a
>> subdomain of ntp.org? If not under ntp.org, than a new domain like
>> wwntp.org, .net, or something.
>
>Hmmm. Why not - pool.ntp.org sounds fine to me.

The nice folks who own the ntp.org domain would have to sign on to
this idea. I can think of lots of reasons why they might want to.

>> In the absence of balancing by network distance, I think that
>> continent specific zones are also a good idea. So, perhaps us.ntp.org
>> for a pool of S3 or better NTP servers for use in the US, eu.ntp.org
>> in the EU, etc.
>
>Tapio Sokura commented on this, too, and I commented on his posting - in
>essence my opinion is to provide these as soon as there are enough (50?)
>timeservers in the pool.
>
>What zones would you propose? I'd think country zones would not get big
>enough for a long time, so continental zones would be ok:
> europe

eu

> north-america
na

> south-america
sa

> africa
af

> asia
as

> australia
au, or oz

>
>Hmm. Three- or four-letter abbreviations would probably look nicer?

two letters are good, too.

>>
>> Or us.pool.ntp.org, to create a subdomain (pool.) that can be hosted
>> more widely than what the main ntp.org domain needs to be.

na.pool.ntp.org for example.

- Eric

Adrian 'Dagurashibanipal' von Bidder

Jan 28, 2003, 4:50:26 PM
Behold! For Eric declaimed:

>>What zones would you propose? I'd think country zones would not get big
>>enough for a long time, so continental zones would be ok:
>> europe
> eu
>
>> north-america
> na
>
>> south-america
> sa
>
>> africa
> af
>
>> asia
> as
>
>> australia
> au, or oz
>
>>
>>Hmm. Three- or four-letter abbreviations would probably look nicer?
>
> two letters are good, too.

two letters are too similar to country codes.

good night
-- vbi

--
featured product: the GNOME desktop - http://gnome.org

Simon Lyall

Jan 28, 2003, 5:53:16 PM
Eric <eje...@spamcop.net> wrote:
> For a world-wide, standard NTP server-pool based on DNS, I think that
> getting the naming right, from the start, is critical.

I'm doing a writeup for something that would support this; it will just
take me a few days, since I have to do a couple of tests etc. to make sure
it would scale.

Basically it would involve playing with the DNS so that a client would get
the closest hosts. This could allow client software to use the DNS names
automatically. The scheme I am thinking of would be (example names):

ntp.ntp.org CNAME for ntp1.ntp.org closest "open server"
ntp1.ntp.org closest "open server"
ntp2.ntp.org 2nd closest "open server"
ntp3.ntp.org 3rd closest "open server" that is run by a separate
organisation to ntp1 and ntp2
ntp4.ntp.org 4th closest "open server" that is run by a separate
organisation to ntp1, ntp2 and ntp3


W.r.t. abusive clients (those that are hitting 50 times a second etc.), I
was wondering if these tend to stay on the same IP for a long period. If
so, we could include a "blackhole" mechanism (i.e. ntp1.ntp.org is mapped
to 127.0.0.1).

The above is just intended for end users; for stratum 2 and above servers
I would guess something more formal could be set up.

--
Simon Lyall. | Newsmaster | Work: simon...@ihug.co.nz
Senior Network/System Admin | Postmaster | Home: si...@darkmere.gen.nz
ihug, Auckland, NZ | Asst Doorman | Web: http://www.darkmere.gen.nz

Eric

Jan 28, 2003, 9:16:59 PM
On Tue, 28 Jan 2003 22:53:16 +0000 (UTC), Simon Lyall
<simon...@ihug.invalid> wrote:

>Eric <eje...@spamcop.net> wrote:
>> For a world-wide, standard NTP server-pool based on DNS, I think that
>> getting the naming right, from the start, is critical.
>
>I'm doing a writeup for something that would support this, it will just
>take me a few days since I have to do a couple of tests etc to make sure
>it would scale.
>
>Basically it would involve playing with the DNS so that a client would get
>the closest hosts. This could allow client software to use the DNS names
>automatically.

This would be a great improvement / alternative to the more granular
regional or world-wide server lists suggested in the original post. I
don't know how to tailor DNS by network distance. Is this a feature
of current DNS implementations?

>The scheme I am thinking off would be (example names)
>
>ntp.ntp.org CNAME for ntp1.ntp.org closest "open server"
>ntp1.ntp.org closest "open server"
>ntp2.ntp.org 2nd closest "open server"
>ntp3.ntp.org 3rd closest "open server" that is run by a separate
> organisation to ntp1 and ntp2
>ntp4.ntp.org 4th closest "open server" that is run by a separate
> organisation to ntp1, ntp2 and ntp3

How well does it scale?

>W.r.t. abusive clients (those that are hitting 50 times a second etc.), I
>was wondering if these tend to stay on the same IP for a long period. If
>so, we could include a "blackhole" mechanism (i.e. ntp1.ntp.org is mapped
>to 127.0.0.1).

Abusive clients are a problem now, and will be a bigger one in the
future. 50 a second is very high. I consider abusive those clients
that poll every minute, usually from behind firewalls, using a new
source port each time, which creates separate associations. But
that's me. I feel 60-second polls from dumb clients should be limited
to a private network. The full NTPD adjusts towards 17
minutes/poll/server, and clients not running NTP don't keep as good
local time anyway, so they shouldn't ask as much of outside servers,
which are a shared, donated resource.

Some code improvements in the standard NTPD, or just changes in
certain default values, as discussed elsewhere, would help. Having
compatible clients that stick to a reasonable discipline would help
even more. How to discover and limit the bad clients across a
distributed NTP herd might be a difficult problem.

The upside is that just having a standard naming convention allows the
burden to be distributed much better, and the anticipated growth to be
managed, as originally conceived.

- Eric

David L. Mills

Jan 28, 2003, 11:37:17 PM
Adrian,

We would be delighted to host a subdomain of ntp.org. I'm kinda dumb
with the details, but maybe a volunteer can be found to help.

The scheme might actually work better than you think. The manycast
scheme can probably be subverted so that finding and configuring the
best three servers, say out of six, would be completely automatic.

Dave

Nelson Minar

Jan 29, 2003, 5:15:14 PM
Congratulations on building a specific solution to the server abuse
problem! I think it's great to try.

Two things that complicate it:

Serving the DNS itself might be expensive

Unless your list of NTP hosts contains several thousand servers, I
don't really see how this helps.

The neat thing about P2P approaches is that your client gets added to
the pool of round robin servers. I think you could do this on the DNS
server end; every time a new IP address resolves pool.ntp.org (or
whatever), store its IP address somewhere. Wait a day, then run some
NTP queries on that new IP to see if it's a reasonable clock. Is it
online? Is the time roughly accurate? If so, automatically add it to
the list of servers in the round robin.
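A sketch of that vetting step, in Python (ntplib, the example addresses and
the 500 ms sanity bound are assumptions for illustration):

import ntplib

# Vet a candidate logged by the DNS server a day earlier: is it online,
# and is its time roughly accurate?
def looks_like_a_clock(address, max_offset=0.5):
    try:
        reply = ntplib.NTPClient().request(address, version=3, timeout=5)
    except Exception:
        return False               # offline, or not speaking NTP at all
    return abs(reply.offset) <= max_offset

candidates = ["192.0.2.1", "192.0.2.2"]        # example addresses only
accepted = [a for a in candidates if looks_like_a_clock(a)]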

If you do this you quickly build a pool where many of the NTP clients
are also NTP servers, spreading the load. NTP's self-healing
characteristics mean this might work OK. I think the main problem
you'd run into is the limit of 16 strata; you may want the DNS server
to favour sending out low-stratum addresses.

Basically, your DNS server is serving the function of a KaZaA
supernode. Again, its load might become a problem.

--
nel...@monkey.org
. . . . . . . . http://www.media.mit.edu/~nelson/

Hans Jørgen Jakobsen

Jan 29, 2003, 5:58:56 PM
On Wed, 29 Jan 2003 22:15:14 GMT, Nelson Minar wrote:
> Congratulations on building a specific solution to the server abuse
> problem! I think it's great to try.
>
> Two things that complicate it:
>
> Serving the DNS itself might be expensive
>
That might depend on how long the TTL on the results will be.

I could imagine a model where the IPs are mapped to the AS
(autonomous system number).
The AS is a good indication of the regions there are in the net.
The BGP metric could be used to find the nearest AS.
/hjj

Tiaan van Aardt

Jan 29, 2003, 6:16:18 PM
Hi,

> >> africa
> > af

There is always a problem with the African continent. Very few countries in
Africa have Internet infrastructure, especially in sub-Saharan Africa, where the
biggest internet community exists in South Africa. Most of the countries have
their own international links and there is, as yet, no continental peering.

Often it would be best for each country to use a US or EU server, since
accessing an NTP server in a neighboring country could mean four trans-Atlantic
hops.

Sub-Saharan Africa has only four Stratum-1 time servers listed at
http://www.time.za.net. I'd appreciate any correction on this statement.

Kind regards,
-Tiaan.

_____________________________________________________
TruTeq Wireless (Pty) Ltd. Tel +27 12 667 1530
http://www.truteq.co.za
Wireless communications for remote machine management

Please remove 'purgethis' when replying to me directly.


David L. Mills

Jan 29, 2003, 9:21:48 PM
Nelson,

We run a number of servers here that do NOT want to be revealed to the
panting pantitude. Those servers will not join unless their addresses
can be protected. Also, there is a more basic problem that explicit
provision is in the current database so operators can specify the rules
of engagement. You couldn't put those in the DNS washing machine unless
the laundry would come clean (did I say that?). Now, most of the really
useful rascals will take pains not to be in that database.

Having said this, the NTP washing machine might prosper just fine
without those stalwart warts (sorry, brain glitch), especially as an
experiment. However, eventually some way must be found to integrate the
fine points of engagement rules and I don't accept a strict rule that
you gotta pay the peer to play the peer, which seems an axiom of the
peer-peer clique. Some peers are more equal than others.

Dave

Adrian 'Dagurashibanipal' von Bidder

Jan 30, 2003, 2:54:17 AM
Behold! For David L. Mills declaimed:

> Adrian,
>
> We would be delighted to host a subdomain of ntp.org. I'm kinda dumb
> with the details, but maybe a volunteer can be found to help.

Now that's an answer! Great.

I offer to continue the DNS management, so it's easiest (for me) if I can
do this on my own machine. So, what you would do is add NS records for the
'pool.ntp.org' DNS zone to the name server config for ntp.org, like:

pool IN NS zbasel.fortytwo.ch.
pool IN NS www.ntp.org.
pool IN NS louie.ntp.org.
(and more records for all other secondary nameservers).

To spread the load you'd configure your nameservers as secondaries for the
pool.ntp.org zone; in BIND version 9 syntax this would be:

zone "pool.ntp.org" {
        type slave;
        masters { 212.254.206.135; };
        file "some-cache-file";
};

Then, of course, you'll have to mention the pool.ntp.org zone on the 'list
of public ntp timeservers' page. And, eventually, I think that it could be
included in the default configuration ntpd ships with.

It would be very important to mention everywhere that people with
computers on a static IP and 24*7 connection should join the pool -
spreading the load only helps, as mentioned in another posting, when
there are really enough servers to spread it.

> The scheme might actually work better than you think. The manycast
> scheme can probably be subverted so that finding and configuring the
> best three servers, say out of six, would be completely automatic.

Good idea if this works - that's where my knowledge of ntp stops at the
moment.

I think the only place to do any choosing of suitable servers would be in
ntpd itself - requiring DNS magic would mean requiring the secondary name
servers to run special software, too, or not running any secondary name
servers. And with caching, it would probably not work at all.

I'll write a newsflash with more ideas for improvements later today.

cheers

Adrian 'Dagurashibanipal' von Bidder

Jan 30, 2003, 2:59:26 AM
Behold! For Simon Lyall declaimed:

> Eric <eje...@spamcop.net> wrote:
>> For a world-wide, standard NTP server-pool based on DNS, I think that
>> getting the naming right, from the start, is critical.
>
> I'm doing a writeup for something that would support this, it will just
> take me a few days since I have to do a couple of tests etc to make sure
> it would scale.
>
> Basically it would involve playing with the DNS so that a client would get
> the closest hosts. This could allow client software to use the DNS names
> automatically. The scheme I am thinking of would be (example names):

I don't see how you could play with the DNS - most of the DNS servers will
be secondary DNS servers offered for free, they'll not be ready to run
special software (and I'm not in a position to offer DNS service if
there's no secondaries).

And: many (most?) DNS queries do not come from the clients, but from some
in-between DNS servers relaying queries, possibly from other relaying DNS
servers. So the IP of the final client might not be that close to the IP
of the computer actually querying the pool.ntp.org DNS server.

Adrian 'Dagurashibanipal' von Bidder

Jan 30, 2003, 3:16:39 AM
Behold! For Nelson Minar declaimed:

> Congratulations on building a specific solution to the server abuse
> problem! I think it's great to try.
>
> Two things that complicate it:
>
> Serving the DNS itself might be expensive
>
> Unless your list of NTP hosts contains several thousand servers, I
> don't really see how this helps.

Many, many ntp servers would be the goal, yes.

> The neat thing about P2P approaches is that your client gets added to
> the pool of round robin servers. I think you could do this on the DNS
> server end; every time a new IP address resolves pool.ntp.org (or
> whatever), store its IP address somewhere. Wait a day, then run some
> NTP queries on that new IP to see if it's a reasonable clock. Is it
> online? Is the time roughly accurate? If so, automatically add it to
> the list of servers in the round robin.

Problem: the IP querying my nameserver is likely not to be the IP of the
ntpd, but the IP of a caching DNS server.

For a start, I think that just always encouraging people to join when
pool.ntp.org is mentioned could help. In a second stage, some support
within ntpd itself could be built in, having configuration lines like

---
server euro.pool.ntp.org pool 3 join
---

The 'pool 3' would cause ntpd
- to take three servers out of the pool and
- to re-resolve the euro.pool.ntp.org name each time one of the servers
becomes unreachable

The 'join' would cause ntpd to submit its IP address to the pool
(abusing TXT DNS records to give the URL of the server responsible for
handling the requests, or something like this).
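Client-side, the hypothetical 'pool 3' behaviour could look roughly like
this, in Python (euro.pool.ntp.org is the proposed name from the example
above, not a live one):

import random
import socket

POOL_NAME = "euro.pool.ntp.org"    # proposed zone, does not exist yet

def pick_from_pool(count, exclude=()):
    # Re-resolve the pool name and pick addresses not already in use.
    _, _, addresses = socket.gethostbyname_ex(POOL_NAME)
    usable = [a for a in addresses if a not in exclude]
    return random.sample(usable, min(count, len(usable)))

servers = pick_from_pool(3)
# ...later, when one of the three becomes unreachable:
dead = servers.pop(0)
servers += pick_from_pool(1, exclude=set(servers) | {dead})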

The big problem (apart from somebody having to write that software) would
of course be the load on the servers that handle pool requests, and the
need to monitor the servers. For the second problem, the solution would
probably be to increase the interval each time a server was successfully
verified, like:

t=0 server joins
t + 6h server is verified and added to the DNS zone

and then verify after 12, 24, 48, 96, ... hours. When a server is bad,
queue it, retest after 3h, and if it's still bad throw it out.
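That schedule could be expressed as a simple state function, sketched here
(a None return meaning 'drop the server from the zone'):

# Doubling verification schedule: 6h after joining, then 12, 24, 48,
# 96, ... hours; a failing server is retested after 3h and dropped if
# it fails again.
def next_check_hours(last_interval, passed, was_queued):
    if passed:
        return 6 if last_interval is None else last_interval * 2
    if was_queued:
        return None    # second failure in a row: throw it out
    return 3           # first failure: queue it and retest soon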

Also, a notification mechanism could be added to the 'pool' mechanism -
ntpd would notify the pool server if a timeserver goes bad. The problem here
is of course that 300 clients might report a bad server within a very
short time, causing a load spike on that pool server.

Note that this works by only modifying ntpd and setting up clever
management of the pool.ntp.org nameserver - nothing needs to be done on the
secondary name servers.

Ok, but that's ideas for the (far) future.

David Schwartz

Jan 30, 2003, 4:46:47 AM
Adrian 'Dagurashibanipal' von Bidder wrote:

> I don't see how you could play with the DNS - most of the DNS servers will
> be secondary DNS servers offered for free, they'll not be ready to run
> special software (and I'm not in a position to offer DNS service if
> there's no secondaries).
>
> And: many (most?) DNS queries do not come from the clients, but from some
> in-between DNS servers relaying queries, possibly from other relaying DNS
> servers. So the IP of the final client might not be that close to the IP
> of the computer actually querying the pool.ntp.org DNS server.

The best solutions I can think of involve giving the client a large
number of servers to choose from. It can then check the large number and
pick the N best. Ideally, if a server went away, it would choose another
one by doing the DNS again.

Perhaps a new server class is needed. It would be specified either by
DNS name or perhaps even a DNS mask (like "ntp[1-9].pool.ntp.org"),
where each name inside the mask could have multiple IPs.

You should be able to configure the maximum number of servers it tries
and the ideal number of servers it tries to keep. Perhaps something
like:

server ntp*.pool.ntp.org try 16 ideal 4 restrict noquery

This would use DNS names starting with 'ntp1.pool.ntp.org' and
continuing until it either found 16 servers or one of the names failed
to resolve. Each of those 16 (or fewer, if there aren't 16) servers would
be tested for round-trip time (and stratum? offset? stability?).

The best 4 of those would be peered with and would be unrestricted
except for queries. If at any time one of those 4 servers went away, the
testing process would start over to find a replacement server.
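A sketch of that selection pass, in Python (the ntp1..ntp16 names follow
the proposed mask and do not exist; the third-party ntplib package is used
for the probing):

import ntplib

# Probe up to 16 pool names, rank the responders by measured round-trip
# delay, and keep the best four.
def probe_delay(host):
    try:
        return ntplib.NTPClient().request(host, version=3, timeout=2).delay
    except Exception:
        return None                  # unresolvable or unreachable

candidates = ["ntp%d.pool.ntp.org" % n for n in range(1, 17)]
delays = {host: probe_delay(host) for host in candidates}
best_four = sorted((h for h in delays if delays[h] is not None),
                   key=delays.get)[:4]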

DS

Adrian 'Dagurashibanipal' von Bidder

Jan 30, 2003, 5:09:14 AM
Behold! For David Schwartz declaimed:

> Perhaps a new server class is needed. It would be specified either by
> DNS name or perhaps even a DNS mask (like "ntp[1-9].pool.ntp.org"),
> where each name inside the mask could have multiple IPs.
>
> You should be able to configure the maximum number of servers it tries
> and the ideal number of servers it tries to keep. Perhaps something
> like:
>
> server ntp*.pool.ntp.org try 16 ideal 4 restrict noquery

You've not seen the posting I wrote in answer to Nelson Minar
(<pan.2003.01.30....@fortytwo.ch>)? I proposed something very
similar - the exact details would have to be worked out and left to the
one actually doing the implementation anyway...

cheers
-- vbi

--
OpenPGP encrypted mail welcome - my key: http://fortytwo.ch/gpg/92082481

Tapio Sokura

Jan 30, 2003, 3:25:15 PM
David Schwartz <dav...@webmaster.com> wrote:
> The best solutions I can think of involve giving the client a large
> number of servers to choose from. It can then check the large number and
> pick the N best. Ideally, if a server went away, it would choose another
> one by doing the DNS again.

There's a possible problem here. If some servers are "clearly" better than
others, the clients will obviously prefer them, and this could lead to
overload on the "best" servers. Obviously all clients won't pick the same
servers as best, due to differing network delays. But there is the risk of
some servers having a lot more clients than others that serve only slightly
degraded time.

DNS tricks can probably help here (give out more addresses of those machines
that are less loaded than those with many clients), but automating this in
practice might turn out to be problematic. On the other hand, if there are
enough servers in the pool, unequal distribution of clients might not turn
out to be a problem at all.
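On the DNS-generation side, the weighting idea might look like this (the
client counts are invented for the example; note random.choices samples
with replacement, which is fine for an illustration):

import random

# Hand out addresses with probability inversely proportional to the
# current client count, so lightly loaded servers appear more often.
clients = {"192.0.2.10": 900, "192.0.2.11": 150, "192.0.2.12": 60}
weights = [1.0 / (count + 1) for count in clients.values()]
handout = random.choices(list(clients), weights=weights, k=3)
print(handout)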

Simon Lyall

Feb 2, 2003, 7:55:16 AM
Simon Lyall <simon...@ihug.invalid> wrote:
> Eric <eje...@spamcop.net> wrote:
>> For a world-wide, standard NTP server-pool based on DNS, I think that
>> getting the naming right, from the start, is critical.
> I'm doing a writeup for something that would support this, it will just
> take me a few days since I have to do a couple of tests etc to make sure
> it would scale.

I've now done the writeup at:

http://www.darkmere.gen.nz/2003/0203.html

sorry for the delay.

Simon Lyall

Feb 2, 2003, 7:58:46 AM
Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:
> Problem: the IP querying my nameserver is likely not to be the IP of the
> ntpd, but the IP of a caching DNS server.

True, but this doesn't appear to be a problem for Akamai and 3DNS, so I am
making the assumption that it's not a big problem in real life.

Simon Lyall

Feb 2, 2003, 8:01:21 AM
Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:
> I don't see how you could play with the DNS - most of the DNS servers will
> be secondary DNS servers offered for free, they'll not be ready to run
> special software (and I'm not in a position to offer DNS service if
> there's no secondaries).

What I'm proposing is a straightforward feature of BIND 9 which is fairly
common. The additional software would be a 10-20 line perl program which
would be run on each server in the pool. This would be short and simple
enough to be easily inspected so that the admin would be confident it was
safe.

Adrian 'Dagurashibanipal' von Bidder

Feb 2, 2003, 10:40:43 AM
Behold! For Simon Lyall declaimed:

> Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:


>> Problem: the IP querying my nameserver is likely not to be the IP of the
>> ntpd, but the IP of a caching DNS server.
>
> True but this doesn't appear to be a problem for Akamai and 3DNS so I am
> making the assumption that it's not a big problem in real life.

IIRC Akamai has actual boxes deployed at the ISPs; they can do that
because the ISP actually notices the drop in traffic. That's not something
we can afford.

cheers
-- vbi


--
There are 3 types of guys -- the ones who hate nerds (all nerds, that
is; girls aren't let off the hook); the ones who are scared off by girls
who are slightly more intelligent than average; and the guys who are
also somewhat more intelligent than average, but are so shy that they
can't put 2 words together when they're within 20 feet of a girl.
-- Vikki Roemer on debian-curiosa

Adrian 'Dagurashibanipal' von Bidder

Feb 2, 2003, 10:43:57 AM
Behold! For Simon Lyall declaimed:

> Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:


>> I don't see how you could play with the DNS - most of the DNS servers will
>> be secondary DNS servers offered for free, they'll not be ready to run
>> special software (and I'm not in a position to offer DNS service if
>> there's no secondaries).
>
> What I'm proposing is a straightforward feature of bind9 which is fairly
> common. The additional software would be a 10-20 line perl program which
> would be run on each server in the pool. This would be short and simple
> enough to be easily inspected so that the admin would be confident it was
> safe.

I don't think that I can afford to require DNS admins to run a custom
script right now. I'm just happy that I even get some people willing to
donate.

When this grows, these options should be reevaluated - people are usually
more ready to donate to a successful project than to one where it's not
clear how it will go.

cheers
-- vbi

--
featured product: GNU Privacy Guard - http://gnupg.org

Simon Lyall

Feb 2, 2003, 1:10:32 PM
Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:
> Behold! For Simon Lyall declaimed:
>> Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:
>>> Problem: the IP querying my nameserver is likely not to be the IP of the
>>> ntpd, but the IP of a caching DNS server.
>>
>> True but this doesn't appear to be a problem for Akamai and 3DNS so I am
>> making the assumption that it's not a big problem in real life.
> IIRC Akamai has actual boxes deployed at the ISPs; they can do that
> because the ISP actually notices the drop in traffic. That's not something
> we can afford.

I'm not sure how this is relevant, since they are still just using the IP
of the DNS server that talks to them. I'm just saying that Akamai use this
technique and don't have major problems with it.

Since we are able to use existing NTP servers we don't need to deploy our
own. So if a customer of mine queries ntp.ntp.org, they should get back the
IP of my local NTP server.

Simon Lyall

Feb 2, 2003, 1:38:18 PM
Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:
> Behold! For Simon Lyall declaimed:
>> Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:
>>> I don't see how you could play with the DNS - most of the DNS servers will
>>> be secondary DNS servers offered for free, they'll not be ready to run
>>> special software (and I'm not in a position to offer DNS service if
>>> there's no secondaries).
>>
>> What I'm proposing is a straightforward feature of bind9 which is fairly
>> common. The additional software would be a 10-20 line perl program which
>> would be run on each server in the pool. This would be short and simple
>> enough to be easily inspected so that the admin would be confident it was
>> safe.

> I don't think that I can afford to require DNS admins to run a custom
> script right now. I'm just happy that I even get some people willing to
> donate.

The script would need to be run by the NTP server admins. People who run
the DNS would just need a configuration to be set up. I doubt we are going
to need a huge number of DNS secondaries; even the largest projects only
have half a dozen.

Here is a quick and dirty version of the script:

#!/bin/sh
wget --output-document=/tmp/sites.txt http://www.ntp.org/dns/sites.txt
cat /tmp/sites.txt | xargs -n 1 ping -c 1 | grep "64 bytes" | cut -f4,7 -d" " > /tmp/output.txt
mail -s `hostname -f` res...@ntp.org < /tmp/output.txt
rm /tmp/output.txt /tmp/sites.txt
exit 0

All it does is download a list of IPs, ping each once, and then mail the
result back. The remote site just runs that via cron every day. Compared
to the mirror scripts that people run, it's pretty simple. I run mirrors
for a couple of websites and both of them have a (write your own mirror
script) policy.

Simon Lyall

Feb 2, 2003, 2:47:27 PM
Eric <eje...@spamcop.net> wrote:
> On Tue, 28 Jan 2003 22:53:16 +0000 (UTC), Simon Lyall
>>Basically it would involve playing with the DNS so that a client would get
>>the closest hosts. This could allow client software to use the DNS names
>>automatically.

> This would be a great improvement / alternative to the more granular
> regional or world-wide server lists suggested in the original post. I
> don't know how to tailor DNS by network distance. Is this a feature
> of current DNS implementations?

It is a built-in feature of BIND version 9 (views) to return different
results based on the originating IP.
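For reference, a hedged named.conf sketch of that feature (BIND 9 views;
the client prefix and file names are invented):

view "nearby-clients" {
        match-clients { 203.0.113.0/24; };      // example client prefix
        zone "pool.ntp.org" {
                type master;
                file "pool.ntp.org.near";       // zone data listing nearby servers
        };
};
view "everyone-else" {
        match-clients { any; };
        zone "pool.ntp.org" {
                type master;
                file "pool.ntp.org.global";
        };
};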

>>The scheme I am thinking off would be (example names)
>>
>>ntp.ntp.org CNAME for ntp1.ntp.org closest "open server"
>>ntp1.ntp.org closest "open server"
>>ntp2.ntp.org 2nd closest "open server"
>>ntp3.ntp.org 3rd closest "open server" that is run by a separate
>> organisation to ntp1 and ntp2
>>ntp4.ntp.org 4th closest "open server" that is run by a separate
>> organisation to ntp1, ntp2 and ntp3

> How well does it scale?

I can't see any reason why it shouldn't scale to a couple of thousand
servers. I need to do a few more tests, however; I'm not a BIND guru.

> The up side is that just by having a standard naming convention, it
> allows for the burden to be distributed much better, and for the
> anticipated growth to be managed, as originally conceived.

A big thing I can see is that it's a lot easier for the shareware NTP
authors to do the right thing rather than pick someone's servers at
random. We can do the work of finding the best servers for them to pick.

Possibly at the same time they can fix the bugs that cause the abusive
behaviour.

David Schwartz

Feb 3, 2003, 4:21:58 PM
Tapio Sokura wrote:

> David Schwartz <dav...@webmaster.com> wrote:

> > The best solutions I can think of involve giving the client a large
> > number of servers to choose from. It can then check the large number and
> > pick the N best. Ideally, if a server went away, it would choose another
> > one by doing the DNS again.

> There's a possible problem here. If some servers are "clearly" better than
> others, the clients will obviously prefer them, and this could lead to
> overload on the "best" servers. Obviously all clients won't pick the same
> servers as best, due to differing network delays. But there is the risk of
> some servers having a lot more clients than others that serve only slightly
> degraded time.

If you're willing to make more significant protocol changes, each
server can send the client a number reflecting its 'willingness' to
accept clients. As load increases, 'willingness' can go down. Also,
weighting the selection algorithm more towards low RTT than good
stability/offset will help.



> DNS tricks can probably help here (give out more addresses of those machines
> that are less loaded than those with many clients), but automating this in
> practice might turn out to be problematic. On the other hand, if there are
> enough servers in the pool, unequal distribution of clients might not turn
> out to be a problem at all.

My personal preference is to build this into the protocol you're using
rather than into DNS. I even like the idea of servers keeping lists of
other servers to send to clients and using DNS only for bootstrapping.

We don't have to live with the limits of DNS, why should we?

DS

Adrian 'Dagurashibanipal' von Bidder

Feb 4, 2003, 4:10:47 AM
Behold! For Simon Lyall declaimed:

> The script would need to be run by the NTP server admins. People who run
> the DNS would just need a configuration to be set up. I doubt we are going
> to need a huge number of DNS secondaries; even the largest projects only
> have half a dozen.

Ok, I misunderstood. Does it have to be run on *all* ntp servers, or is
there a way to hack around some servers not willing to run the script? I
still have problems forcing people donating resources to additionally run
a script (in fact, I know that at least 3 servers that currently are on
the list don't want to be bothered by me - they said I could add their
servers, but they won't do any more).



> Here is a quick and dirty version of the script:
>
> #!/bin/sh
> wget --output-document=/tmp/sites.txt http://www.ntp.org/dns/sites.txt
> cat /tmp/sites.txt | xargs -n 1 ping -c 1 | grep "64 bytes" | cut -f4,7 -d" " > /tmp/output.txt
> mail -s `hostname -f` res...@ntp.org < /tmp/output.txt
> rm /tmp/output.txt /tmp/sites.txt
> exit 0
>
> All it does is download a list of IPs, ping each once, and then mail the
> result back. The remote site just runs that via cron every day. Compared
> to the mirror scripts that people run, it's pretty simple. I run mirrors
> for a couple of websites and both of them have a (write your own mirror
> script) policy.

I hope you don't misunderstand me - it certainly would be fine to pick
better ntp servers than just random ones. My fundamental problem is that I
feel that many people won't join if they need to run extra software (and
on a router that just happens to run ntp you probably can't, so it'd have
to be on another box close by, making the setup more fragile).

cheers
-- vbi


--
featured link: http://fortytwo.ch/gpg/intro

Eric

Feb 5, 2003, 1:20:00 PM
On Tue, 04 Feb 2003 10:10:47 +0100, "Adrian 'Dagurashibanipal' von
Bidder" <middle...@fortytwo.ch> wrote for the entire planet to see:

>Behold! For Simon Lyall declaimed:


>
>>
>> The script would need to be run by the NTP server admins. People who run
>> the DNS would just need a configuration to be set up. I doubt we are going
>> to need a huge number of DNS secondaries; even the largest projects only
>> have half a dozen.
>
>Ok, I misunderstood. Does it have to be run on *all* ntp servers, or is
>there a way to hack around some servers not willing to run the script? I
>still have problems forcing people donating resources to additionally run
>a script (in fact, I know that at least 3 servers that currently are on
>the list don't want to be bothered by me - they said I could add their
>servers, but they won't do any more).

I think he's saying the scripts are run on the pool. zone secondary
DNS machines, not on the NTP servers whose IPAs are being handed out,
or by the NTP clients making the DNS request.

- Eric

Simon Lyall

Feb 5, 2003, 6:30:43 PM
Adrian 'Dagurashibanipal' von Bidder <middle...@fortytwo.ch> wrote:
> Behold! For Simon Lyall declaimed:
>> The script would need to be run by the NTP server admins. People who run
>> the DNS would just need a configuration to be set up. I doubt we are going
>> to need a huge number of DNS secondaries; even the largest projects only
>> have half a dozen.
> Ok, I misunderstood. Does it have to be run on *all* ntp servers, or is
> there a way to hack around some servers not willing to run the script? I
> still have problems forcing people donating resources to additionally run
> a script (in fact, I know that at least 3 servers that currently are on
> the list don't want to be bothered by me - they said I could add their
> servers, but they won't do any more).

The script would have to run on the majority. Basically, when a new NTP
server joins, you have to have a good way to determine which of the 20,000
(say) networks on the Internet are closer to it and should use it rather
than other servers.

The easiest way is to run the script on that server and directly check the
distance.

Alternatively (1) you could run something at the client end that checks
the new server's distance; in theory, however, you would need every client
to run this, or perhaps just one client in each of the 20,000 networks.

Or (2) you could only have the new server be used by clients you *know*
are local. These would be those at the same ISP or that you otherwise
determined were close (on an ad hoc basis).

>> All it does is download a list of IPs, ping each once, and then mail the
>> result back. The remote site just runs that via cron every day. Compared
>> to the mirror scripts that people run, it's pretty simple. I run mirrors
>> for a couple of websites and both of them have a (write your own mirror
>> script) policy.

> I hope you don't misunderstand me - it certainly would be fine to pick
> better ntp servers than just random ones. My fundamental problem is that I
> feel that many people won't join if they need to run extra software (and
> on a router that just happens to run ntp you probably can't, so it'd have
> to be on another box close by, making the setup more fragile).

In which case you have to have more intelligence in each client, in order
for it to pick the best few servers out of the list you give it.

I don't think this is going to work for the shareware Windows ntp clients
which are probably the main source of the current problems.

I think we have 3 main choices:

1. Return list of remote servers to the client that may or may not be
close.

2. Place intelligence on servers to determine their topology, and thus we
can return the closest ones to the client.

3. Return a large list of servers to the client and assume it's smart
enough to work out the closest.

I think (3) is unrealistic, and (1) may yield servers so distant and
non-optimal for some clients that people won't trust it.

Simon Lyall

Feb 5, 2003, 6:33:30 PM
Eric <eje...@spamcop.net> wrote:
> I think he's saying the scripts are run on the pool. zone secondary
> DNS machines, not on the NTP servers whose IPAs are being handed out,
> or by the NTP clients making the DNS request.

The script will be run on the NTP servers (or machines near them). One
central machine will collect and process the results into DNS zone
file(s), and these will be given to the secondary DNS servers for the zone.

Tim Hogard

Feb 11, 2003, 7:40:23 AM
Simon Lyall (simon...@ihug.invalid) wrote:

: Simon Lyall <simon...@ihug.invalid> wrote:
: > Eric <eje...@spamcop.net> wrote:
: >> For a world-wide, standard NTP server-pool based on DNS, I think that
: >> getting the naming right, from the start, is critical.
: > I'm doing a writeup for something that would support this, it will just
: > take me a few days since I have to do a couple of tests etc to make sure
: > it would scale.
: I've now done the writeup at:
: http://www.darkmere.gen.nz/2003/0203.html

That will work fine for most protocols; however, it does have
one problem (and it will bite with NTP).

If I configure NTP to use a name on some devices (not the typical
xntpd-based ones), what happens is that the device will pick a DNS
server and convert the name to an IP number. It will do the NTP stuff
with that IP address and store the details with the name, not the
number. Then later it will poll again, once again looking up the
IP number but using some info from the last connection. If the
addresses keep flopping about, it may decide the server is messed
up and give up.

There are also cases like Australia, where one provider covers most
of the country, but there are enough other people who don't like
per-megabyte charges and get their data elsewhere. Those IP addresses
appear to be in the US even though their latency shows they are not,
and their routing table details aren't public, so the script will
make bad guesses about what's connected to what. The result is that
some of those addresses can ping a Telstra address in Sydney in 24ms,
but the public route tables show that the only destination for that
range is in the US.

For most protocols, and most networks, this should work better
than the current system. Something you might want to look into
is the uucp maps from long ago and some of their issues. This
problem isn't new, it's just a different topography.

-tim
http://web.abnormal.com

David L. Mills

Feb 11, 2003, 1:29:07 PM
Tim,

You show exactly the reasons why NTP MANYcast was designed rather than
contrived anycast. The manycast paradigm starts with the "nearest" in
TTL hops, looking for a number of servers, then narrows the choice using
engineered mitigation algorithms. If a sufficient number of servers have
not been found, it expands the ring by one hop, includes the new servers
found and tries again. To the extent your model fits this paradigm, it
should operate in the same way. In the USA routing tables make no
difference. From Delaware it is 20 hops to New York and 10 hops to
Lozangeles.
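An expanding-ring search of that shape, sketched in Python (224.0.1.1 is
the IANA-assigned NTP multicast group; the counts and timeouts are
arbitrary):

import socket
import time

NTP_MULTICAST = "224.0.1.1"
NTP_PORT = 123

def expanding_ring(wanted=3, max_ttl=8, wait=2.0):
    # Probe the multicast group with a growing TTL until enough
    # servers have answered, then stop widening the ring.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)
    request = b"\x1b" + 47 * b"\x00"     # minimal NTPv3 client packet
    found = set()
    for ttl in range(1, max_ttl + 1):
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
        sock.sendto(request, (NTP_MULTICAST, NTP_PORT))
        deadline = time.time() + wait
        while time.time() < deadline:
            try:
                _, (address, _port) = sock.recvfrom(512)
                found.add(address)
            except socket.timeout:
                pass
        if len(found) >= wanted:
            break
    return sorted(found)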

For years I have strongly encouraged folks to include geographic
coordinates in the public list entries, and many have done so. I've also
considered an NTPv4 extension field to return this information and
another to return a list of the current servers, along with metrics.
I've also considered the use of pathchar to characterize a path. That
could be done in real time as a coarse filter.

Having said all this, I am still concerned about the rules of
engagement, since until some kind of access policy can be confirmed,
folks on the public server list are not going to participate.

Dave

Adrian 'Dagurashibanipal' von Bidder

Feb 12, 2003, 7:01:51 AM
Behold! For David L. Mills declaimed:

> Having said all this, I am still concerned about the rules of
> engagement, since until some kind of access policy can be confirmed,
> folks on the public server list are not going to participate.

Just to remind you - at the early stage where the project now is, all
that's running is a global DNS round robin, with continental-scope
regional zones being added. Access control is just not possible right now.
But with enough servers participating, net load should stay low enough to
be barely noticeable on busy sites.

I just don't see the need for access control - with DNS round robin, the
case of everybody just hitting the same server should never happen, so
load problems should soon belong to the past, surely?

cheers
-- vbi

--
featured product: PostgreSQL - http://postgresql.org

David L. Mills

Feb 12, 2003, 9:06:56 AM
Adrian,

You scratch a serious itch. See the access restrictions in the list of
public servers. You don't get to question the motivation for these
restrictions. It is their wish and must surely be respected. I say
again, unless some way is found to express these rules, the public list
operators will not join your project. Certainly, the UDel servers will
not join until the rules can be expressed.

What I would like to hear is discussion on technical means to express
these rules so that an intelligent robot could select accordingly.

Dave


Danny Mayer

Feb 12, 2003, 10:43:02 AM
"Adrian 'Dagurashibanipal' von Bidder" <va...@fortytwo.ch> wrote in message news:<pan.2003.02.12....@fortytwo.ch>...

The load issue is not with the DNS, it's with the NTP servers. Not only
do they need to be able to handle any load, they need to be able to
decide to remove themselves from the list and after a short time have
the clients stop making requests. Most people will keep their systems
running for months and they will never recheck to see if the address is
still in the pool list. Even if the NTP Server is stopped, clients will still
be attempting to use it. This generates a lot of IP traffic even if it gets
constant failures. We know because a lot of the Stratum 1 servers
see this. What really needs to happen is to ensure that the clients
are well behaved and check regularly for valid servers.

Danny

Adrian 'Dagurashibanipal' von Bidder

Feb 12, 2003, 2:40:50 PM
Behold! For David L. Mills declaimed:

> Adrian,
>
> You scratch a serious itch. See the access restrictions in the list of
> public servers. You don't get to question the motivation for these
> restrictions. It is their wish and must surely be respected. I say
> again, unless some way is found to express these rules, the public list
> operators will not join your project. Certainly, the UDel servers will
> not join until the rules can be expressed.
>
> What I would like to hear is duscussion on technical means to express
> these rules so that an intelligent robot could select accordingly.

What I *can* see is adding country zones if there are enough servers per
country (since most restrictions are probably per country) - but I have
yet to hear a satisfying solution for the problem of countries with no or
too few servers (just CNAME them to the continental/global
zone?). If country zones are to be added, then all countries should have
their zone, to make autoconfiguration for installers easy.

If country zones are in place, timeserver operators' wishes to be added
only to the country zone or only to the country + continental zone can
probably be respected.

cheers
-- vbi

>
> Dave
>
> Adrian 'Dagurashibanipal' von Bidder wrote:
>>
>> Behold! For David L. Mills declaimed:
>>
>> > Having said all this, I am still concerned about the rules of
>> > engagement, since until some kind of access policy can be confirmed,
>> > folks on the public server list are not going to participate.
>>
>> Just to remind you - at the early stage where the project now is, all
>> that's running is a global DNS round robin, with continental-scope
>> regional zones being added. Access control is just not possible right now.
>> But with enough servers participating, net load should stay low enough to
>> be barely noticeable on busy sites.
>>
>> I just don't see the need for access control - with DNS round robin, the
>> case of everybody just hitting the same server should never happen, so
>> load problems should soon belong to the past, surely?
>>
>> cheers
>> -- vbi
>>
>> --
>> featured product: PostgreSQL - http://postgresql.org

--
get my gpg key here: http://fortytwo.ch/gpg/92082481

Adrian 'Dagurashibanipal' von Bidder

Feb 12, 2003, 2:46:04 PM
Behold! For Danny Mayer declaimed:

> "Adrian 'Dagurashibanipal' von Bidder" <va...@fortytwo.ch> wrote in message news:<pan.2003.02.12....@fortytwo.ch>...
>> Behold! For David L. Mills declaimed:

> The load issue is not with the DNS, it's with the NTP servers. Not only
> do they need to be able to handle any load, they need to be able to
> decide to remove themselves from the list and after a short time have

Right now, it is all email-based - my email address is on the project
homepage, and I will promptly remove servers if asked to do so. With the
problem you describe below, a few days' delay will not make a difference.
(Yes, a web interface will definitely come if the project grows.)

> the clients stop making requests. Most people will keep their systems
> running for months and they will never recheck to see if the address is
> still in the pool list. Even if the NTP Server is stopped, clients will still
> be attempting to use it. This generates a lot of IP traffic even if it gets
> constant failures. We know because a lot of the Stratum 1 servers
> see this. What really needs to happen is to ensure that the clients
> are well behaved and check regularly for valid servers.

I think this would be outside the scope of this project - and you can't
really force all programmers who might one day write an NTP client to
write a well-behaved one. Probably the RFC should specify that a client
stops querying a server altogether after it has been unreachable for x
days (or at least tries to resolve the hostname afresh, so IP changes
would be noticed and unused servers could be redirected to 127.0.0.1).
The possibility of (D)DoS attacks is one thing we just have to live with.

My DNS project won't change anything about these issues, though.

cheers
-- vbi

David L. Mills

Feb 12, 2003, 10:52:46 PM
Adrian,

Interesting thought. Should the kiss-of-death become an RFC standard and
conformance the price of admission? It's reasonably close to the ICMP
port unreasonable semantics. We could call it NTP Source Quench. My,
does that concept shake up a lot of old Internet adventures.

Dave

