
DNS Benchmark


Bear

Apr 14, 2012, 2:15:05 PM
Ha, I was right in my assertion about DNS Benchmark's custom list
results. I've received a reply from the developer confirming it. So my
image link displaying those results from the custom list build was dead on.
The custom list does compare about 5000 DNS servers and returns the 50
fastest from your location, along with the speed of each one...and
Symantec's was the fastest. Running the benchmark is not necessary to
further justify my conclusions.


Dustin's blue line comments refer to after-the-fact benchmark testing using
that list, which changes during the day as server loads vary, and it is
server loads and server performance that in the end determine the speed of
the service...not electricity (lol @ Pooh & Dustin). Overall, the
benchmarks show Cox and Symantec changing positions throughout the day, with
Cox slowing down more in the evenings and weekends, which makes sense.


--
Bear
http://bearware.info
The real Bear's header path is:
news.sunsite.dk!dotsrc.org!filter.dotsrc.org!news.dotsrc.org!not-for-mail

Bear

Apr 14, 2012, 2:25:35 PM
Bear <bearbo...@gmai.com> wrote in
news:XnsA03586CC9FBC0be...@130.225.254.104:

> Overall, the benchmarks show Cox and Symantec changing positions
> throughout the day, with Cox slowing down more in the evenings and
> weekends, which makes sense.
>
>
FWIW, we are talking about less than a 1/4-second difference, so in most
cases, unless your ISP has horrible servers or node distribution, it isn't
enough reason to change DNS servers.

What is a good reason is the additional services a DNS server may provide,
as in the case of Norton ConnectSafe, Google DNS and OpenDNS...in
best-to-least effective order.
Message has been deleted

Mike Easter

Apr 14, 2012, 4:55:31 PM
Bear wrote:
> Ha, I was right in my assertion about DNS Benchmark's custom list
> results. I've received a reply from the developer confirming it. So my
> image link displaying those results from the custom list build was dead on.
> The custom list does compare about 5000 DNS servers and returns the 50
> fastest from your location, along with the speed of each one...and
> Symantec's was the fastest. Running the benchmark is not necessary to
> further justify my conclusions.

Just to clarify my understanding of the use of dnsbench, which is at
some variance from what I understand you to be saying.

In the beginning there was a thread^1 here - DNS Proxy - which went on
for about 90 messages during which Gibson's DNS Benchmark was mentioned
by David Lipman to help resolve disputes about DNS speeds and such.

Gibson's site and the dnsbench tool itself have a lot of information
about how to use the tool. The default configuration contains about 70
nameservers (which are selected based on US-centric criteria) plus those
configured in the user's system.

The user can /optionally/ also run a feature to 'scan' almost 5000
nameservers worldwide (as opposed to US-centric) to auto-choose/make a
custom list of the 50 'closest' to the user of the tool. Naturally the
user can also add servers of hir choice to total a configuration of up
to 200 benchmarkable servers.

Gibson describes this custom list creation process both in the tool's
annunciation window and also here^2. Basically the benchmarking
takes place for the 70+ default resolvers plus the user's (before any
custom list is created), or for the 50 'custom' plus the user's if the
custom list is created. So in addition to the default list benchmarking
built in, the user can initiate the custom list creation, which takes
over 30 minutes, and then run that benchmark.

In addition to all of the above, the tool also feeds data back to the
Gibson site: the 200 fastest servers found by each user who runs this
scan. Amassed across the numerous users who run it, that helps Gibson
keep a database file of those 200 to aid in the evolution of the default
list for the tool. That .csv file is also available^3 for anyone to
access.


^1 GG link to thread
http://groups.google.com/group/alt.comp.freeware/browse_thread/thread/2f63cc4f20d7098f?hl=en&tvc=2


^2 Building a Custom Resolver List http://www.grc.com/dns/custom-list.htm

^3 Fastest 200 http://www.GRC.com/dns/resolvers.csv



--
Mike Easter

Bear

Apr 14, 2012, 5:29:01 PM
Mike Easter <Mi...@ster.invalid> wrote in news:9uu6i2FopuU1
@mid.individual.net:

> The user can /optionally/ also run a feature to 'scan' almost 5000
> nameservers worldwide (as opposed to US-centric) to auto-choose/make a
> custom list of the 50 'closest' to the user of the tool. Naturally the
> user can also add servers of hir choice to total a configuration of up
> to 200 benchmarkable servers.

The custom list scans about 5000 nameservers and returns the 50 fastest,
not closest. The returned list includes the speed of each.

The list I posted of the 50 nameservers chosen was the result of that
scan, sorted fastest first. That list is also placed as an ini file in
the program's folder to be used in future benchmark testing...which
produces the "blue line" Dustin was speaking about.

I do not find any of this very important as the differences are less
than 1/4 second which is quite insignificant.

What is more important are the additional services Norton ConnectSafe,
Google DNS and OpenDNS provide over your ISP's DNS service...which is
basically filtering known bad actors, and more if you choose those other
options...which I don't like.

What Dustin and Pooh were doing is more akin to hijacking a thread and
trolling over a pedantic point they made about electricity determining
which would be fastest, which is ludicrous.

Mike Easter

Apr 14, 2012, 6:15:34 PM
Bear wrote:
> Mike Easter

>> The user can /optionally/ also run a feature to 'scan' almost 5000
>> nameservers worldwide (as opposed to US-centric) to auto-choose/make a
>> custom list of the 50 'closest' to the user of the tool. Naturally the
>> user can also add servers of hir choice to total a configuration of up
>> to 200 benchmarkable servers.
>
> The custom list scans about 5000 nameservers and returns the 50 fastest,
> not closest. The returned list includes the speed of each.

I put 'closest' in quotes because Gibson uses the term closest in his
description and because the mechanism of the preliminary 'testing' of
the 5000 is going to sort the 'fastest' preliminary results based on
'network closest'.

Here are Steve's words:

<SG> As you have seen, GRC's DNS Benchmark contains a built-in list of
well-known public DNS resolvers. But since a DNS resolver's performance
is largely determined by its distance from its user, no preset list can
be optimum for everyone. A resolver that is fast for someone in London
will be slow for someone in New York or Bangkok.

The Benchmark resolves this through its ability to create a customized
list of the closest 50 DNS resolvers for every user. By quickly scanning
a global list of 4,849 DNS resolvers, a file named "DNSBENCH.INI",
containing the IP addresses of the "closest 50" resolvers will be
created in the same directory as the Benchmark's executable file. This
file will supply the IP addresses to be tested during subsequent DNS
benchmarking. </SG>

> The list I posted of the 50 nameservers chosen was the result of that
> scan, sorted fastest first.

Fastest in the context = 'network closest' since network proximity is
the most important characteristic in this preliminary assessment.



--
Mike Easter

Bear

Apr 14, 2012, 6:51:03 PM
Mike Easter <Mi...@ster.invalid> wrote in
news:9uub85...@mid.individual.net:
Explain why you think Symantec was listed as fastest when it was not the
closest to me? Nor were Covad Communications, THEPLANET.com, or YMAX, all
listed with faster times than Cox, whose servers are the closest to me?

I know the time difference is insignificant, but I'm trying to sort out
some inconsistencies discussed about this topic. I hope you can help, as
I do trust your honesty.

Mike Easter

Apr 14, 2012, 8:34:44 PM
Bear wrote:
> Mike Easter

>> The Benchmark resolves this through its ability to create a customized
>> list of the closest 50 DNS resolvers for every user.

>>> The list I posted of the 50 nameservers chosen was the result of
>>> that scan, sorted fastest first.
>>
>> Fastest in the context = 'network closest' since network proximity is
>> the most important characteristic in this preliminary assessment.

> Explain why you think Symantec was listed as fastest when it was not the
> closest to me?

The preliminary listing is almost instantaneous for each server.

The comprehensive test takes 'quite a while' just to do 50-70 servers.

The preliminary listing of 'closest' is not the 'take home' result. The
take home result is the result of the benchmark.

> Nor were Covad Communications, THEPLANET.com, or YMAX, all
> listed with faster times than Cox, whose servers are the closest to me?

If I were going to be comparing DNS, I would be comparing the overall
result of the benchmark testing, not the order of the preliminary 'scan'
for network proximity.

> I know the time difference is insignificant, but I'm trying to sort out
> some inconsistencies discussed about this topic. I hope you can help, as
> I do trust your honesty.

I presume that your process was to -1- create the custom list via the
tool's mechanism -2- add any servers you were interested in which
weren't on the list and then most importantly -3- run the benchmark on
that custom list + IPs/DNS of particular interest and then -4- discuss
or post the specific profile that the tool provides after the
comprehensive benchmark for each of the servers you want to compare or
talk about.


--
Mike Easter

Mark Warner

Apr 14, 2012, 8:44:13 PM
Mike Easter wrote:
>
> I presume that your process was to -1- create the custom list via the
> tool's mechanism -2- add any servers you were interested in which
> weren't on the list and then most importantly -3- run the benchmark on
> that custom list + IPs/DNS of particular interest and then -4- discuss
> or post the specific profile that the tool provides after the
> comprehensive benchmark for each of the servers you want to compare or
> talk about.

You mean actually researching? Testing per the protocol? Drawing logical
conclusions based on the test results?

Surely you jest.

--
Mark Warner
MEPIS Linux
Registered Linux User #415318
...lose .inhibitions when replying

Bear

Apr 14, 2012, 8:54:20 PM
Mike Easter <Mi...@ster.invalid> wrote in
news:9uujd0...@mid.individual.net:
I wasn't really interested in it at all, nor was I concerned with DNS
Benchmark, until Pooh and Dustin said electricity would determine which
was fastest, meaning the closest. My post was about Norton ConnectSafe and
their description of their product...which is DNS filtering. Dustin and
Pooh created the sideshow.

Anyway, when you compile the list, here is what is described:

"This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
determine whether they are accessible and responsive from your present
location. If so, the Benchmark measures the resolver's minimum response
time, as well as whether it appears to be operating reliably and
correctly.

While this is underway, every qualifying resolver is dynamically
"ranked" from fastest to slowest. The top line of the display above
shows the minimum response time of the fastest resolver found so far, as
well as the minimum response time of the "50th fastest."

I posted a link to the screenshot of those results afterwards. Seems
pretty clear cut to me. Of course Dustin did none of that...nor did
Pooh. They were actually just trolling the thread.

Bear

Apr 14, 2012, 8:55:29 PM
Mark Warner <mhwarner.i...@gmail.com> wrote in news:9uujurFgefU1
@mid.individual.net:

> Mike Easter wrote:
>>
>> I presume that your process was to -1- create the custom list via the
>> tool's mechanism -2- add any servers you were interested in which
>> weren't on the list and then most importantly -3- run the benchmark on
>> that custom list + IPs/DNS of particular interest and then -4- discuss
>> or post the specific profile that the tool provides after the
>> comprehensive benchmark for each of the servers you want to compare or
>> talk about.
>
> You mean actually researching? Testing per the protocol? Drawing logical
> conclusions based on the test results?
>
> Surely you jest.
>

I know you find that remarkable, as I've seen you do no such thing ever.
BTW, your pets were proven wrong...as I knew they were.

Mike Easter

Apr 14, 2012, 8:56:06 PM
Mike Easter wrote:

> -4- discuss or post the specific profile that the tool provides after
> the comprehensive benchmark for each of the servers you want to
> compare or talk about.

... where the profile consists of a lot of different elements, with 3
different elements just in the class 'response time' -- cached, uncached,
and dotcom -- and Steve explains why each is important and how the 3 are
weighed in comparing server response time.

Even tho' the cached is the fastest and most important/frequent
response, Steve explains why one might want to look at the uncached
ranking first rather than the cached... "Sorting by Green — This
uncached measure of performance is important enough that you might wish
to view the entire DNS server list sorted by fastest uncached
performance first, rather than fastest cached performance."

He also explains why dotcom response is weighed.
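The cached/uncached split can be demonstrated outside the tool. A rough
sketch, assuming the third-party dnspython library (pip install dnspython),
which is not what the Benchmark uses internally: a made-up label forces a
cache miss at the resolver on the first query, and repeating the identical
query should then be answered from the resolver's cache (negative caching
per RFC 2308) and come back faster.

import random
import string
import time
import dns.resolver  # third-party: pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]  # Google DNS, one of the services named in this thread

# A made-up subdomain: the first query is a guaranteed cache miss at the
# resolver; the second should be served from its negative cache and be faster.
name = "".join(random.choices(string.ascii_lowercase, k=12)) + ".example.com"

for attempt in ("uncached", "cached"):
    start = time.perf_counter()
    try:
        resolver.resolve(name, "A")
    except dns.resolver.NXDOMAIN:
        pass  # expected for a nonexistent name; only the timing matters here
    print(attempt, round((time.perf_counter() - start) * 1000, 1), "ms")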

If one is going to argue about the fine points of DNS benchmarking, I
think it is very important to thoroughly digest what all Steve says
about his tool's purposes and interpretations.

--
Mike Easter

Bear

Apr 14, 2012, 9:00:24 PM
Mike Easter <Mi...@ster.invalid> wrote in news:9uukl4FlqgU1
@mid.individual.net:

> If one is going to argue about the fine points of DNS benchmarking, I
> think it is very important to thoroughly digest what all Steve says
> about his tool's purposes and interpretations.

I wasn't arguing about DNS Benchmarking. I was laughing at Pooh and Dustin
saying electricity determined the fastest...meaning the closest would be the
fastest.

At any rate the difference is less than 1/4 second as I have said and a
silly argument and test...unless of course you have a piss poor ISP.

Mike Easter

Apr 14, 2012, 9:18:20 PM
Bear wrote:
> Mike Easter
>> Bear wrote:
>>> Mike Easter
>>
>>>> The Benchmark resolves this through its ability to create a
>>>> customized list of the closest 50 DNS resolvers for every
>>>> user.
>>
>>>>> The list I posted of the 50 nameservers chosen was the
>>>>> result of that scan, sorted fastest first.

That preliminary scan is not a/the benchmark

>>>> Fastest in the context = 'network closest' since network
>>>> proximity is the most important characteristic in this
>>>> preliminary assessment.
>>
>>> Explain why you think Symantec was listed as fastest when it was
>>> not the closest to me?

I am not looking at whatever you were looking at when you configured it
however you configured it, but I can comment on the order of my
dnsbench.ini which the tool saves after making its list.

The .ini file which is saved is not ordered by fastest. The top of the
ini list is my system's first two DNS servers. The rest of the ini list
is the servers which the tool considered network closest to me, ordered
by their IP address, smallest to largest.
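That 'smallest to largest' ordering is numeric per octet, not an alphabetic
sort of the dotted quads. A minimal illustration using only Python's
standard library; the sample IPs are borrowed from an ini listing posted
elsewhere in this thread:

import ipaddress

# Reproduce the ini ordering described above: the system's own DNS first,
# then the scanned servers sorted by numeric IP value, smallest to largest.
system_dns = ["192.168.1.1"]
scanned = ["208.67.220.220", "62.3.32.17", "129.250.35.250", "83.146.21.5"]
ordered = system_dns + sorted(scanned, key=ipaddress.IPv4Address)
print("\n".join(ordered))
# -> 192.168.1.1, then 62.3.32.17, 83.146.21.5, 129.250.35.250, 208.67.220.220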

>> If I were going to be comparing DNS, I would be comparing the
>> overall result of the benchmark testing, not the order of the
>> preliminary 'scan' for network proximity.

>> -3- run the benchmark on that custom list

> Anyway, when you compile the list, here is what is described:

I know what that description says about the listing.

> "This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
> determine whether they are accessible and responsive from your present
> location. If so, the Benchmark measures the resolver's minimum response
> time, as well as whether it appears to be operating reliably and
> correctly.

That is not the benchmark.

> While this is underway, every qualifying resolver is dynamically
> "ranked" from fastest to slowest. The top line of the display above
> shows the minimum response time of the fastest resolver found so far, as
> well as the minimum response time of the "50th fastest."

That is not the benchmark test.

> I posted a link to the screenshot of those results afterwards. Seems
> pretty clear cut to me.

That order of/during the creation of the custom .ini list is not (yet)
the result of the benchmark of the servers which were put on that list.

In order to get the benchmark, you should run/click the button 'Run
Benchmark' and wait a while for all of the 50 servers to be tested.
Then you can interpret all of their little colored icons and rank their
various response times and note any problems or unwanted traits they may
have according to Gibson's Tabular Data and Conclusions sections.


--
Mike Easter

Mike Easter

Apr 14, 2012, 9:29:58 PM
Bear wrote:
> Mike Easter

>> what all Steve says about his tool's purposes and interpretations.

> At any rate the difference is less than 1/4 second as I have said and a
> silly argument and test...

I don't agree that it is a silly test.

You might be surprised at the information you get.

What you could do would be to start the dnsbench; it will use the .ini
file it created earlier and your own configured DNS. After the
benchmarking is complete, you can go to the tab Conclusions.

I suspect that Steve's advice about comparing your chosen DNS servers to
the others which were benchmarked might hold some surprises for you.

There are 7 different classes of green checks or red Xes in which
Steve's tool compares your nameservers to the others; and the results
are about more than speed/response time which is only 1 item.


--
Mike Easter

Bear

Apr 14, 2012, 9:37:44 PM
Mike Easter <Mi...@ster.invalid> wrote in news:9uulupFtecU1
@mid.individual.net:
I suppose you need to read what it says about compiling the custom list
again. It's a direct quote. The ini file is in the order the custom list
determined and when it first posted the final list, that list contained
the speed of each with the fastest on top.

So if a 1/4 second is important to you, keep trying to make it say what
you want it to say and not what I quoted.

Bear

Apr 14, 2012, 9:41:38 PM
Mike Easter <Mi...@ster.invalid> wrote in
news:9uumkj...@mid.individual.net:
It's a silly test. We disagree. Dustin and Pooh disagree with you too,
and me...imagine that. They say the closest server is always going to be
the fastest. I disagree with that. Imagine that. Such is what the
argument was about...silly shit.

Now what the post was about is a different story. DNS filtering can be
very beneficial.

p-0^0-h the cat

Apr 14, 2012, 9:50:09 PM
On 15 Apr 2012 01:00:24 GMT, Bear <bearbo...@gmai.com> wrote:

>I was laughing at Pooh and Dustin
>saying electricity determined the fastest...meaning the closest would be the
>fastest.

Then cite the MID where I ever said that.

I said this, and I stand by this statement.

Message-ID: <bv5rn7lqn85qoe3n3...@4ax.com>

"The speed of return of a DNS query depends upon

Speed of network between client and server

Load on the server at the time of query

Whether the server has the record cached"
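Those three factors can be checked directly. A minimal sketch, assuming the
third-party dnspython library (pip install dnspython), and not anything the
Benchmark itself runs: it times repeated A-record queries against a resolver
and keeps the minimum response time, the same figure the tool's scan reports.

import time
import dns.resolver  # third-party: pip install dnspython

def min_response_ms(server_ip, name="example.com", repeats=5):
    # Minimum over several queries; repeating smooths out momentary server load.
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server_ip]
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        r.resolve(name, "A")
        best = min(best, (time.perf_counter() - start) * 1000)
    return best

# Google DNS and an OpenDNS resolver mentioned elsewhere in the thread:
for ip in ("8.8.8.8", "208.67.222.220"):
    print(ip, round(min_response_ms(ip), 1), "ms")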

Try standing by your statements Bottom.

Cite one MID that backs your lies Bottom.

Come on Bottom.

Cite, you time wasting, low rent, imbecile.

--
p-0^0-h the cat
Internet Terrorist, Mass Sock puppeteer, Agent provocateur, Gutter rat
Devil incarnate, Linux user#666, BaStarD hacker

Mike Easter

Apr 14, 2012, 10:00:52 PM
Bear wrote:
> Mike Easter

>> That preliminary scan is not a/the benchmark

> I suppose you need to read what it says about compiling the custom list
> again.

You bottom posters have the same kind of problem reading and
comprehending what you reply to as the top posters do.

> It's a direct quote.

It isn't a direct quote about the actual benchmarking.

> The ini file is in the order the custom list
> determined and when it first posted the final list, that list contained
> the speed of each with the fastest on top.

Only the 'fastest'/nearest by the mechanism of the list
compilation, not the 'fastest' by any parameter of the actual
benchmarking, which hasn't yet been done at the time the list is
created. Closer to a ping than a DNS benchmark assessment.

> So if a 1/4 second is important to you, keep trying to make it say what
> you want it to say and not what I quoted.

It is beginning to appear to me that you interpreted something rashly
or prematurely. The order of the identified network-closest DNS
servers is not the benchmark which the tool was designed to run, but
simply a process by which the tool can select 50 servers out of 5000
to test comprehensively, instead of the default 70, which are not
selected based on their network proximity (relating to speed) to the
specific user.

Somehow you aren't getting it.


--
Mike Easter

p-0^0-h the cat

Apr 14, 2012, 10:12:06 PM
On 15 Apr 2012 01:37:44 GMT, Bear <bearbo...@gmai.com> wrote:

>I suppose you need to read what it says about compiling the custom list
>again. It's a direct quote. The ini file is in the order the custom list
>determined and when it first posted the final list, that list contained
>the speed of each with the fastest on top.

The chances of the screenshot you posted representing a fastest-first
ordering are something approaching one in the number of atoms in the
universe.

Just look at your screenshot.

http://bearware.info/screenshots/DNSBenchmark000.png

Do you really think if you test 5000 DNS servers they are ever going to
rank in that order? So neatly. Dunce you are, Bottom.


>So if a 1/4 second is important to you, keep trying to make it say what
>you want it to say and not what I quoted.

--

Mark Warner

Apr 14, 2012, 10:23:51 PM
Mike Easter wrote:
>
> If one is going to argue about the fine points of DNS benchmarking, I
> think it is very important to thoroughly digest what all Steve says
> about his tool's purposes and interpretations.

That's just cruel.

p-0^0-h the cat

Apr 14, 2012, 10:29:53 PM
On 15 Apr 2012 01:41:38 GMT, Bear <bearbo...@gmai.com> wrote:

>It's a silly test. We disagree. Dustin and Pooh disagree with you too

Dustin and Pooh don't need you to tell people whom they agree or disagree with.

>and me...imagine that.

That's not difficult to imagine.

>They say the closest server is always going to be
>the fastest.

No, I said the closer the server, the more likely it is to return a query the fastest.

> I disagree with that. Imagine that.

You have yet to supply a credible logical argument that counters that statement.

>Such is what the
>argument was about...silly shit.

The argument was about your silly statement that Symantec servers were the fastest and
gave the most valid results. I proved that statement was wrong.

Bear

Apr 14, 2012, 10:33:16 PM
Mike Easter <Mi...@ster.invalid> wrote in
news:9uuoei...@mid.individual.net:

> Bear wrote:
>> Mike Easter
>
>>> That preliminary scan is not a/the benchmark
>
>> I suppose you need to read what it says about compiling the custom
>> list again.
>
> You bottom posters have the same kind of problem reading and
> comprehending what you reply to as the top posters do.
>
>> It's a direct quote.
>
> It isn't a direct quote about the actual benchmarking.

Of course not. It's a direct quote about compiling the custom list and
is very clear about it.

>
>> The ini file is in the order the custom list
>> determined and when it first posted the final list, that list
>> contained the speed of each with the fastest on top.
>
> Only the 'fastest'/nearest by the mechanism of the list
> compilation, not the 'fastest' by any parameter of the actual
> benchmarking, which hasn't yet been done at the time the list is
> created. Closer to a ping than a DNS benchmark assessment.

His quote is very clear.
>
>> So if a 1/4 second is important to you, keep trying to make it say
>> what you want it to say and not what I quoted.
>
> It is beginning to appear to me that you interpreted something rashly
> or prematurely. The order of the identified network-closest DNS
> servers is not the benchmark which the tool was designed to run, but
> simply a process by which the tool can select 50 servers out of 5000
> to test comprehensively, instead of the default 70, which are not
> selected based on their network proximity (relating to speed) to the
> specific user.
>
> Somehow you aren't getting it.

LOL...translation...I don't agree with you. You keep avoiding the direct
points.


Dustin and Pooh said the closest server is the fastest...electricity

I said no...other factors are at play.

The custom list compiles the fastest of 5000, listing the fastest
first. That is directly quoted from the process description, and I
witnessed as fact that that is how it was returned...that is what it
actually did, and it listed the speed of each. Pretty hard to get
around that.

That's pretty much it. Now what?

Zak Hipp

Apr 14, 2012, 10:33:50 PM
The 'electricity' mentioned is said as if the recipient has a grounding in
electronics and transmission line theory. It may have been stated more
accurately, in context, as propagation delay.

You'll notice that a standard Cat5e network cable has a maximum recommended
length between two nodes. It is not primarily the resistance of the wire
that reduces the voltage to an unusable level, although capacitance losses
are of increasing significance as frequencies increase. When a digital
signal (essentially a square wave, consisting of a fundamental sinusoidal
wave plus all of its odd harmonics) is propagated down a transmission line,
the component sinusoidal waves travel at different speeds, distorting the
original square wave to the point where the receiver can no longer
reconstruct the original signal for its own use. There are various
techniques that extend the usable length (hence twisted pair, reducing the
number of crystal boundaries in the metal that the electrons have to cross
and that cause distortions, and so on), but eventually a practical limit is
reached with metal. Thus the need for reshaping the signal; that takes
time, so the "further away" the two points are from each other, the more
reshaping points are needed between them.

The internet is full of switches, routers, servers, NICs, buffers, dropped
packets, etc., all introducing their own delays, which grow with the amount
of equipment placed between the two points. And there is a tendency for
more delaying things to exist the further apart the two points are.

Naturally the speed of the servers, server load, the type of transmission
medium used, packet routing, time of day, etc. all play their part in the
speed of the service; however, all things being equal, the closer the two
end points are (in internet terms), the faster the round trip will be.
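To put rough numbers on the raw propagation term alone: a back-of-envelope
sketch, taking the textbook approximation that signals travel at about two
thirds of c in copper or fibre, applied to the 5-mile and 25-mile distances
argued about in this thread.

# Back-of-envelope: one-way propagation delay at roughly 2/3 the speed of light.
C = 299_792_458          # speed of light in a vacuum, m/s
v = 2 * C / 3            # approximate signal velocity in the medium
METRES_PER_MILE = 1609.34

for miles in (5, 25):
    delay_us = miles * METRES_PER_MILE / v * 1e6
    print(f"{miles} miles one-way: about {delay_us:.0f} microseconds")

# ~40 us vs ~200 us: the distance term alone is tiny next to server load
# and the queueing/reshaping delays of the equipment in between.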


P.S.

I have no intention of participating further in this thread; I'm only
throwing in my 2p worth, though incomplete by far, toward what seems a
genuine quest for some understanding of the position others have taken.


Zak Hipp

Bear

Apr 14, 2012, 10:34:35 PM
Mark Warner <mhwarner.i...@gmail.com> wrote in news:9uuppnFi63U1
@mid.individual.net:

> Mike Easter wrote:
>>
>> If one is going to argue about the fine points of DNS benchmarking, I
>> think it is very important to thoroughly digest what all Steve says
>> about his tool's purposes and interpretations.
>
> That's just cruel.
>

I think it's funny....I mean, how can you argue with the direct quote I
posted from the process? Yep...very funny indeed.

p-0^0-h the cat

Apr 14, 2012, 10:38:55 PM
On 14 Apr 2012 21:29:01 GMT, Bear <bearbo...@gmai.com> wrote:

>The custom list scans about 5000 nameservers and returns the 50 fastest,
>not closest. The returned list includes the speed of each.

>The list I posted of the 50 nameservers chosen was the result of that
>scan, sorted fastest first. That list is also placed as an ini file in
>the program's folder to be used in future benchmark testing...which
>produces the "blue line" Dustin was speaking about.


Here's my custom ini file. Show me the speed field. Show me Symantec's servers... Ha Ha.
Wayne Kerr.

192.168.1.1 pooh.local
62.3.32.17 ns2.alharbitelecom.com
62.69.62.6 ns.murphx.net
62.69.62.7 ns1.murphx.net
62.204.64.101 ns1.flatbox-facilities.net
62.241.163.201 resolver4.systems.pipex.net
82.201.33.5 ns2.is.nl
83.146.21.5 cht-dns.dslgb.com
83.146.21.6 cht-dns.dslgb.com
87.117.198.200 ns1.externalresolver.rapidswitch.com
87.117.237.100 ns2.externalresolver.rapidswitch.com
94.75.228.29 privacybox.de
129.250.35.250 x.ns.gin.ntt.net
154.32.105.18 res1.dns.uk.psi.net
154.32.107.18 res2.dns.uk.psi.net
154.32.109.18 res3.dns.uk.psi.net
156.154.70.1 rdns1.ultradns.net
158.43.128.1 cache0002.ns.eu.uu.net
158.43.128.72 cache0000.ns.eu.uu.net
158.43.192.1 cache0003.ns.eu.uu.net
158.43.240.3 cache0005a.ns.eu.uu.net
158.43.240.4 cache0004.ns.eu.uu.net
193.203.80.90 ns1.sohonet.co.uk
194.72.9.34 indnsc30.ukcore.bt.net
194.72.9.38 indnsc31.bt.net
194.74.65.68 indnsc40.ukcore.bt.net
194.74.65.69 ns7.bt.net
195.12.4.247 res01.opal-solutions.com
195.74.128.6 res03.opal-solutions.com
195.129.12.83 cache0206.ns.eu.uu.net
195.224.180.238 newtoro.pncl.co.uk
199.2.252.10 ns2.sprintlink.net
208.67.220.123 resolver2-fs.opendns.com
208.67.220.220 resolver2.opendns.com
208.67.222.123 resolver1-fs.opendns.com
208.67.222.220 resolver3.opendns.com
212.74.114.129 mk-cache-3.ns.uk.tiscali.com
212.118.241.1 ns1.lon.pnap.net
212.118.241.33 ns2.lon.pnap.net
212.139.132.6 th-cache-2.ns.uk.tiscali.com
212.165.130.9 smtp2.intersatafrica.com
213.52.192.198 ··· no official Internet DNS name ···
213.133.33.2 ns1.is.nl
213.208.106.212 lon1-dns1.nildram.net
213.208.106.213 lon1-dns2.nildram.net
213.253.136.17 dns-cache1.lon.as5587.net
213.253.137.17 dns-cache2.lon.as5587.net
217.14.128.50 ns1.domainmaster.co.uk
217.27.240.20 nscache0.pobox.co.uk
217.72.162.2 dns0.hotchilli.net
217.72.168.34 vs1.intouchsystems.co.uk

Bear

Apr 14, 2012, 10:46:38 PM
Zak Hipp <Z...@invalid.invalid> wrote in
news:ysqir.114891$lq1.1...@fx18.am4:
They were talking about the difference between 25 miles and 5 miles...I
mean you know the speed of electricity...that's almost immeasurable.

The biggest factors are server loads and server efficiency and even with
that said, unless you have a piss poor ISP service, the speed difference
is relatively negligible. All of the benchmark tests I made returned
milliseconds with speed differences of less than 1/4 second. It's silly.

I did leave Cox a number of years ago because they loaded their nodes up
too heavily in my area and peak time speeds dropped to unsatisfactory.

They have fixed that issue and I haven't had any problems since.

The topic of the original discussion was not about speed however, even
though it was advertised they are faster as a side note...it was about
DNS filtering.

p-0^0-h the cat

Apr 14, 2012, 10:57:53 PM
On 15 Apr 2012 02:46:38 GMT, Bear <bearbo...@gmai.com> wrote:

>They were talking about the difference between 25 miles and 5 miles...I
>mean you know the speed of electricity...that's almost immeasurable.
>
>The biggest factors are server loads and server efficiency and even with
>that said, unless you have a piss poor ISP service, the speed difference
>is relatively negligible.

Then why claim Symantec was faster?

> All of the benchmark tests I made returned
>milliseconds with speed differences of less than 1/4 second. It's silly.

Why act silly, then? I told you your claim that Symantec was faster was
incorrect. I was right. Deal with it.

<snip>

>The topic of the original discussion was not about speed however

The OP in this thread and subject is DNS Benchmark. You can duck and dodge all you like,
shit is coming your way from all angles on this one.

Symantec's servers are not universally faster. Not possible, and the validity argument is
nonsense as well. Feel free to question that statement as well. I'd pad your arse prior
though.

>, even
>though it was advertised they are faster as a side note...it was about
>DNS filtering.

--

Mike Easter

Apr 14, 2012, 11:56:45 PM
p-0^0-h the cat wrote:
> Bear wrote:
>
>> I suppose you need to read what it says about compiling the custom
>> list again. It's a direct quote. The ini file is in the order the
>> custom list determined and when it first posted the final list,
>> that list contained the speed of each with the fastest on top.

During the dynamic compilation of the 'instantaneous' query (not the
actual benchmark) of each of 5000ish servers, the servers are ordered as
nearest/fastest at the top so as to enable the tool to select the 50 of
the 5000 which are nearest/fastest to the user in anticipation of using
those 50 to create the dnsbench.ini to benchmark.

However, when the dnsbench.ini file is created for those 50, that is not
the order of the ini. See below the screenshot link.

> http://bearware.info/screenshots/DNSBenchmark000.png

That .png shows the same order structure as a/my saved .ini file, *not*
the nearest/fastest at the top. The top two Symantec IPs which have the
little boxes around their IP value are the two IPs which are your
system's DNS, and the rest of the IPs are arranged in increasing IP
value going down the list.

The Bear was earlier asking why the Symantec servers were at the top if
they weren't the closest geographically and the answer is because that
is where the tool puts the user's DNS servers and delineates them with a
box/rectangle around their IPs.


--
Mike Easter

Bear

Apr 15, 2012, 1:01:07 AM
Mike Easter <Mi...@ster.invalid> wrote in
news:9uuv7q...@mid.individual.net:

> During the dynamic compilation of the 'instantaneous' query (not the
> actual benchmark) of each of 5000ish servers, the servers are ordered
> as nearest/fastest at the top so as to enable the tool to select the
> 50 of the 5000 which are nearest/fastest to the user in anticipation
> of using those 50 to create the dnsbench.ini to benchmark.
>

Direct quote:
"While this is underway, every qualifying resolver is dynamically
"ranked" from fastest to slowest."

"Once the ranking scan is completed, the IP addresses of those 50
fastest qualifying resolvers will be loaded into the Benchmark so that
you can immediately perform a comprehensive analysis of their
performance and the IPs will also be written to a file named
"DNSBENCH.INI", located in the same directory as the Benchmark. This
"DNSBENCH.INI" file will then automatically be loaded whenever the
Benchmark is run in the **future**."

It says nothing about NEAREST. It is also clear the rankings are ordered
and final and future benchmarks can be performed at a /later date/.

We are done here Mike.

I'm not interested in playing your word games. The quotes are concise.

p-0^0-h the cat

Apr 15, 2012, 6:00:55 AM
On 15 Apr 2012 05:01:07 GMT, Bear <bearbo...@gmai.com> wrote:

>Direct quote:
>"While this is underway, every qualifying resolver is dynamically
>"ranked" from fastest to slowest."
>
>"Once the ranking scan is completed, the IP addresses of those 50
>fastest qualifying resolvers will be loaded into the Benchmark so that
>you can immediately perform a comprehensive analysis of their
>performance and the IPs will also be written to a file named
>"DNSBENCH.INI", located in the same directory as the Benchmark. This
>"DNSBENCH.INI" file will then automatically be loaded whenever the
>Benchmark is run in the **future**."
>
>It says nothing about NEAREST. It is also clear the rankings are ordered
>and final and future benchmarks can be performed at a /later date/.
>
>We are done here Mike.
>
>I'm not interested in playing your word games. The quotes are concise.

The quotes are concise. Your interpretation of them is incorrect. It does
not say in the quotes that they are /displayed/ from fastest to slowest;
it says /ranked/. That ranking is performed internally to the application.
They are not displayed fastest to slowest once the ranking is completed.

Experimental evidence will verify that I am correct. Try it. It's repeatable. Therefore,
it's subject to peer review. See how scientific method blows bearshit out of the window?

That's a fact, you stupid, ignorant, egocentric, time-wasting enemy of
educational endeavour, rational thought, freeware, the weak, the have-nots,
and all the rest of the fucked-up, sick shit you spew onto this newsgroup.

p-0^0-h the cat

Apr 15, 2012, 6:04:34 AM
On Sat, 14 Apr 2012 20:56:45 -0700, Mike Easter <Mi...@ster.invalid> wrote:

>p-0^0-h the cat wrote:
>> Bear wrote:
>>
>>> I suppose you need to read what it says about compiling the custom
>>> list again. It's a direct quote. The ini file is in the order the
>>> custom list determined and when it first posted the final list,
>>> that list contained the speed of each with the fastest on top.
>
>During the dynamic compilation of the 'instantaneous' query (not the
>actual benchmark) of each of 5000ish servers, the servers are ordered as
>nearest/fastest at the top so as to enable the tool to select the 50 of
>the 5000 which are nearest/fastest to the user in anticipation of using
>those 50 to create the dnsbench.ini to benchmark.

Maybe you are running a different version to mine, but mine doesn't list
by the fastest while it's creating the custom list, on any tab, while it's
running or after it has finished. Once it has finished the list, it is as
you say: system DNS server/s at the top, the 50 sorted numerically by IP.
That's all. It doesn't list by nearest/fastest at all, and only by fastest
after the benchmark is run, if the 'Sort Fastest First' checkbox is ticked.

>However, when the dnsbench.ini file is created for those 50, that is not
>the order of the ini. See below the screenshot link.
>
>> http://bearware.info/screenshots/DNSBenchmark000.png
>
>That .png shows the same order structure as a/my saved .ini file, *not*
>the nearest/fastest at the top. The top two Symantec IPs which have the
>little boxes around their IP value are the two IPs which are your
>system's DNS, and the rest of the IPs are arranged in increasing IP
>value going down the list.

Yes, that is correct. That was how mine looked after running the custom
list selection: system DNS server at the top, the rest sorted numerically
by IP.

>The Bear was earlier asking why the Symantec servers were at the top if
>they weren't the closest geographically and the answer is because that
>is where the tool puts the user's DNS servers and delineates them with a
>box/rectangle around their IPs.

Yes, that is correct.

Mike Easter

Apr 15, 2012, 8:08:57 AM
p-0^0-h the cat wrote:
> Mike Easter

>> During the dynamic compilation of the 'instantaneous' query (not
>> the actual benchmark) of each of 5000ish servers, the servers are
>> ordered as nearest/fastest at the top so as to enable the tool to
>> select the 50 of the 5000 which are nearest/fastest to the user in
>> anticipation of using those 50 to create the dnsbench.ini to
>> benchmark.

That statement of mine is incorrect. During the instantaneous queries
there is no ordering.

> Maybe you are running a different version to mine, but mine doesn't
> list by the fastest while it's creating the custom list, on any tab,
> while it's running or after it has finished.

Yes.



--
Mike Easter

Mike Easter

Apr 15, 2012, 8:34:40 AM
Bear wrote:

> Direct quote:
> "While this is underway, every qualifying resolver is dynamically
> "ranked" from fastest to slowest."

This ranking of fastest (which he later calls closest to the user, in the
quote I provided earlier) serves a purpose: sending the top 200 back to
Gibson's site.

> "Once the ranking scan is completed, the IP addresses of those 50
> fastest qualifying resolvers will be loaded into the Benchmark so that
> you can immediately perform a comprehensive analysis of their
> performance and the IPs will also be written to a file named
> "DNSBENCH.INI", located in the same directory as the Benchmark. This
> "DNSBENCH.INI" file will then automatically be loaded whenever the
> Benchmark is run in the **future**."
>
> It says nothing about NEAREST.

I have already provided you the quoted material where he calls the
fastest 50 the closest 50 to the user. That was pasted from the
annunciation of the tool. He calls fastest closest in that context as
described.

> It is also clear the rankings are ordered and final and future
> benchmarks can be performed at a /later date/.

The dnsbench.ini is *not* ordered by fastest (or nearest/closest). It is
ordered by -1- your DNS at the top, then -2- the fastest 50 (which he
calls the *closest* 50 elsewhere, as I cited) ordered by IP, not ranked
by closest/fastest.

In addition to the dnsbench.ini created and ordered as described, the
tool also sends back to Gibson's site the top 200 presumably ordered by
speed because he refers to it as a 'sorted' list. This sending can be
suppressed by the command line instructions provided at his site.

" ... upon completing the scanning of the master list, the DNS Benchmark
returns to GRC a sorted list of the top 200 resolvers found by the
Benchmark during that scan." http://www.grc.com/dns/benchmark-faq.htm

My /assumption/ would be that the sorting of the top 200 would be by
fastest first. Perhaps that list might also have some values included.


--
Mike Easter

Bear

Apr 15, 2012, 9:34:46 AM
p-0^0-h the cat <super...@justpurrfect.invalid> wrote in
news:cm9ko75tpasgc9fhf...@4ax.com:

> On 15 Apr 2012 01:00:24 GMT, Bear <bearbo...@gmai.com> wrote:
>
>>I was laughing at Pooh and Dustin
>>saying electricity determined the fastest...meaning the closest would be
>>the fastest.
>
> Then cite the MID where I ever said that.
>
> I said this, and I stand by this statement.
>
> Message-ID: <bv5rn7lqn85qoe3n3...@4ax.com>
>
> "The speed of return of a DNS query depends upon
>
> Speed of network between client and server
>
> Load on the server at the time of query
>
> Whether the server has the record cached"
>
> Try standing by your statements Bottom.
>
> Cite one MID that backs your lies Bottom.
>
> Come on Bottom.
>
> Cite, you time wasting, low rent, imbecile.

I don't have to support my statements with a MID because they are backed
by truth.

--
Bear
http://bearware.info
http://twitter.com/#!/Bear__Bottoms
http://tinychat.com/bearbottoms

Mike Easter

Apr 15, 2012, 8:53:31 AM
Bear wrote:

> It says nothing about NEAREST.

The term Gibson uses is closest.

Here is how you can see those words.

Start the .exe with or without a dnsbench.ini in the same folder. If
you already have a dnsbench.ini in the folder, the initial behavior will
be different, but you can get to the annunciation information either way,
because you can ask the tool to create a new dnsbench.ini whether you
already have one or not.

Click the Nameservers tab to get to the Add/remove button or
alternatively use the System menu function to get to the Rebuild custom
list function.

The system menu can be accessed in one of two ways; either by left
clicking the Gibson/grc icon in the upper left of the title bar or by
right clicking the title bar anywhere. That system menu also has a
rebuild custom list function.

When this function activation is begun the annunciation window will
contain what I have pasted below using the annunciation windows copy
function.


<Annunciation window>

Building a Custom Resolver List
(This will require approximately 37 minutes.)

It only needs to be done once for you to obtain
the full benefits from this Benchmark

As you have seen, GRC's DNS Benchmark contains a built-in list of
well-known public DNS resolvers. But since a DNS resolver's performance
is largely determined by its distance from its user, no preset list can
be optimum for everyone. A resolver that is fast for someone in London
will be slow for someone in New York or Bangkok.

The Benchmark resolves this through its ability to create a customized
list of the closest 50 DNS resolvers for every user. By quickly scanning
a global list of 4,849 DNS resolvers, a file named "DNSBENCH.INI",
containing the IP addresses of the "closest 50" resolvers will be
created in the same directory as the Benchmark's executable file. This
file will supply the IP addresses to be tested during subsequent DNS
benchmarking. Though this process will require approximately 37 minutes,
it only needs to be done once, though you may rebuild this custom list
any time you wish.

This "list building" process can run unattended in the background while
you do other things. But, as when running the benchmark, heavy network
usage should be avoided, if possible, during the process.

An important privacy-related note: When the global resolvers are
scanned, their IP addresses are obtained from GRC's servers on-the-fly
so that the list is always current. After the resolver ranking is
completed, a sorted list of the "fastest 200" resolver IPs are
anonymously returned to GRC for use in updating our online global
resolver statistics database:

http://www.GRC.com/dns/resolvers.csv
The connection to GRC is secured using SSL/TLS which cannot be snooped,
intercepted, or tampered with, and no record of any kind is retained of
your connection IP address - or any other user data. The "fastest 200"
data is used to maintain the "resolvers.csv" database, which anyone is
welcome to examine at any time (click the link above). This database
will eventually allow us to permanently eliminate all of those resolvers
that have never even made it into the "Top 75"... thus dramatically
speeding up all future global resolver list building.

Note: If you do not wish to have this Benchmark return the "fastest 200"
list to GRC, or if your (Linux/WINE) system does not support SSL
connections, you may use the /nosend command line option to suppress
any reporting of the Benchmark's findings to GRC.

To proceed, press the Build Custom List button above.

This will require approximately 37 minutes to complete, during which,
where possible, minimizing other network traffic, would be best.

- Steve Gibson

Please Note: This program is Copyright (c) 2010 by Gibson Research
Corporation -- ALL RIGHTS RESERVED. This program is FREEWARE. Although
it may not be altered in any way, it MAY BE FREELY COPIED AND
DISTRIBUTED onto and through any and all computer media in ANY form or
fashion. You are hereby granted the right to do so.
• • •

</Annunciation window>


--
Mike Easter

Bear

Apr 15, 2012, 9:55:51 AM
Mike Easter <Mi...@ster.invalid> wrote in news:9uvtj1F5f2U1
@mid.individual.net:
Oh Mike. You're trying to pretend you're dumb but I know you're smarter than
that and understand the true facts.
Message has been deleted

Mike Easter

Apr 15, 2012, 9:31:51 AM
Bear wrote:
> Mike Easter

>> My /assumption/ would be that the sorting of the top 200 would be by
>> fastest first. Perhaps that list might also have some values included.
>
> Oh Mike. You're trying to pretend you're dumb but I know you're smarter than
> that and understand the true facts.

Presumably one could 'intercept' the file which is sent back to Gibson's
site with the 200 fastest for that user and see its structure. Perhaps
it is a .csv file with a number of fields which could be sorted.

I have seen the resolvers.csv file from Gibson's site created from the
'amassing' of such 200 fastest and how many fields it has. Naturally it
can be ordered by IP or rank or whatever.

There is no way that the user's dnsbench.ini as saved could be sorted by
fastest, because that .ini does not contain any ranking data in its saved
condition: only the IP address and the rDNS or, if there isn't an rDNS,
'no official Internet DNS name'.
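For the curious, the published resolvers.csv can be pulled and inspected in
a few lines. A sketch using only the Python standard library; it assumes
nothing about the column layout beyond the file parsing as CSV, and that
GRC serves the URL plainly to a script:

import csv
import io
import urllib.request

# Fetch Gibson's published database and peek at its structure without
# presuming what the fields mean.
URL = "http://www.GRC.com/dns/resolvers.csv"
with urllib.request.urlopen(URL) as resp:
    text = resp.read().decode("utf-8", errors="replace")

rows = list(csv.reader(io.StringIO(text)))
print(len(rows), "rows")
for row in rows[:5]:  # the first few rows reveal the field layout
    print(row)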


--
Mike Easter

Dustin

Apr 15, 2012, 9:34:21 AM
Bear <bearbo...@gmai.com> wrote in
news:XnsA035CB84A1E76be...@130.225.254.104:

> Mike Easter <Mi...@ster.invalid> wrote in news:9uukl4FlqgU1
> @mid.individual.net:
>
>> If one is going to argue about the fine points of DNS benchmarking,
>> I think it is very important to thoroughly digest what all Steve
>> says about his tool's purposes and interpretations.
>
> I wasn't arguing about DNS Benchmarking. I was laughing at Pooh and
> Dustin saying electricity determined the fastest...meaning the closest
> would be the fastest.

You enjoy overgeneralizing what someone told you.. eh?

If you want to be really nitpicky, electricity is the decider at the end
of the day. Our packets, ironically, do use that as their means of travel.



--
Character is doing the right thing when nobody's looking. There are too
many people who think that the only thing that's right is to get by, and
the only thing that's wrong is to get caught. - J.C. Watts

Dustin

Apr 15, 2012, 9:36:54 AM
Zak Hipp <Z...@invalid.invalid> wrote in
news:ysqir.114891$lq1.1...@fx18.am4:

Thanks for the well-laid-out post, Zak.

Some of us do understand it, others won't.

Dustin

Apr 15, 2012, 9:41:14 AM
Dustin <bughunte...@gmail.com> wrote in
news:XnsA03661895332HHI2948AJD832@no:

> Bear <bearbo...@gmai.com> wrote in
> news:XnsA035CB84A1E76be...@130.225.254.104:
>
>> Mike Easter <Mi...@ster.invalid> wrote in news:9uukl4FlqgU1
>> @mid.individual.net:
>>
>>> If one is going to argue about the fine points of DNS benchmarking,
>>> I think it is very important to thoroughly digest what all Steve
>>> says about his tool's purposes and interpretations.
>>
>> I wasn't arguing about DNS Benchmarking. I was laughing at Pooh and
>> Dustin saying electricity determined the fastest...meaning the
>> closest would be the fastest.
>
> You enjoy overgeneralizing what someone told you.. eh?
>
> If you want to be really nitpicky, electricity is the decider at the
> end of the day. Our packets, ironically, do use that as their means
> of travel.

Sadly, I have to follow up my own post...

Electricity is one deciding factor. Others (routing, server load, etc.)
all contribute. Or, in other words, a DNS server isn't going to be
faster just because such and such company runs it. The original claim
was that Symantec's is faster than your own ISP's. That's going to vary
depending on a person's locale for a number! of reasons. Still tho,
electricity does play a rather major part in it. Just not the sole
decider.

Mike Easter

Apr 15, 2012, 10:18:20 AM
Bear wrote:
> Mike Easter

>> I have already provided you the quoted material where he calls the
>> fastest 50 the closest 50 to the user.

> Oh Mike. You're trying to pretend you're dumb but I know you're
> smarter than that and understand the true facts.

Here are some more true facts:

In this post...

Subject: Re: DNS PROXY
From: Bear
Message-ID: <XnsA02F639AD6FB4be...@130.225.254.104>
Date: 08 Apr 2012 14:47:30 GMT



....you said:

> Damn, Cox's servers are closer than Symantec's to my location yet
> the DNSBenchmark shows Symantec's DNS servers to be faster. In fact,
> it is fastest of the performance of 4,849 resolvers.
>
> http://bearware.info/screenshots/DNSBenchmark000.png


But that isn't what your graphic shows.

Your graphic shows a screenshot of the dnsbench tool's display with the
2 Symantec servers at the top and outlined and in *orange* with orange
solid icons followed by other servers in IP order.

That graphic indicates that the Symantec servers are your system DNS,
not fastest and it also indicates that the Symantec servers are
suboptimal choices from a particular perspective according to Gibson
because your Symantec servers do not (properly) return an error for
non-existent names.

Here is pasting from Gibson's page on what your posted graphic means.

<SG>
Regardless of its color, a filled-in dot indicates that the server is
currently being used by the system...

... a filled-in dot — meaning that the nameserver at this IP is
currently configured for use. The text is also bold and the entire line
has a black outline...

Orange colored servers may be somewhat less desirable to use depending
upon your feelings about the handling of typos and nonexistent domain
names: The Benchmark colors a nameserver orange when it does not return
an error in response to a query for a non-existent domain name. DNS
nameservers are supposed to simply return a “Not Found” error to
indicate that the requested domain name does not exist. But ISPs and
third-party DNS service providers are adopting a new “revenue-enhancing”
trick: Instead of returning an error, they redirect the user's browser
to their own marketing-related search page. This gives them a way of
being “helpful” and of generating some additional marketing and
advertising revenue from your typos or bad links — by causing you to
confront a page you didn't ask for and probably don't want.

Many people (especially Internet purists) find this sort of thing quite
annoying, so the Benchmark tests for it so that you will be informed.
The good news is that people have been annoyed enough to induce most
ISPs and providers who do this to offer the option of turning off this
redirection. If your ISP, or a DNS provider you are using is doing this,
you might wish to explore how to turn off the DNS redirection. Once that
is done, you can quickly use this Benchmark to verify that your system's
DNS nameservers are all in the green and are neither red nor orange.
</SG> http://www.grc.com/dns/operation.htm
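That orange behaviour is simple to test by hand against any resolver. A
sketch, again assuming the third-party dnspython library: the probe name is
random, so a well-behaved server must answer NXDOMAIN; getting records back
instead suggests the redirection Gibson describes.

import uuid
import dns.resolver  # third-party: pip install dnspython

def redirects_nonexistent(server_ip):
    # True if the resolver answers a made-up name instead of returning NXDOMAIN.
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server_ip]
    bogus = uuid.uuid4().hex + ".example.net"  # effectively guaranteed not to exist
    try:
        r.resolve(bogus, "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False  # the correct "Not Found" behaviour
    return True  # got records for a nonexistent name: redirection

print(redirects_nonexistent("8.8.8.8"))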



--
Mike Easter

Bear

Apr 15, 2012, 11:14:02 AM
Mike Easter <Mi...@ster.invalid> wrote in news:9uvtj1F5f2U1
@mid.individual.net:

> My /assumption/ would be that the sorting of the top 200 would be by
> fastest first. Perhaps that list might also have some values included.

I'm not assuming. I performed the test. It ranked them fastest to slowest
and included the speeds...if you or Dustin or Pooh had performed the
test as I did, you too would have witnessed it. I don't care what it said
elsewhere, I care about /that/ test and what it said there...as it is the
test of contention. Try as you might to change it (why? what is your
motive?)...the direct quotes from /that/ test, confirmed by my witness,
are as stated.

I see no further reason to argue with you about this. You are not being
honest.

Bear

Apr 15, 2012, 11:16:36 AM
Mike Easter <Mi...@ster.invalid> wrote in news:9v00u8Fu9sU1
@mid.individual.net:

> There is no way that the user's dnsbench.ini as saved could be sorted by
> fastest, because that .ini does not contain any ranking data in its saved
> condition: only the IP address and the rDNS or, if there isn't an rDNS,
> 'no official Internet DNS name'.

That is because the ini file is created from the test for /future/
benchmark tests, /AS QUOTED/. My recommendation to you would be: PERFORM
THE TEST AND SEE FOR YOURSELF.

Bear

Apr 15, 2012, 11:21:28 AM
Dustin <bughunte...@gmail.com> wrote in
news:XnsA03662B4A5A2CHHI2948AJD832@no:

> Dustin <bughunte...@gmail.com> wrote in
> news:XnsA03661895332HHI2948AJD832@no:
>
>> Bear <bearbo...@gmai.com> wrote in
>> news:XnsA035CB84A1E76be...@130.225.254.104:
>>
>>> Mike Easter <Mi...@ster.invalid> wrote in news:9uukl4FlqgU1
>>> @mid.individual.net:
>>>
>>>> If one is going to argue about the fine points of DNS benchmarking,
>>>> I think it is very important to thoroughly digest what all Steve
>>>> says about his tool's purposes and interpretations.
>>>
>>> I wasn't arguing about DNS Benchmarking. I was laughing at Pooh and
>>> Dustin saying electricity determined the fastest...meaning the
>>> closest would be the fastest.
>>
>> You enjoy overgeneralizing what someone told you.. eh?
>>
>> If you want to be really nitpicky, electricity is the decider at the
>> end of the day. Our packets, ironically, do use that as their means
>> of travel.
>
> Sadly, I have to follow up my own post...
>
> Electricity is one deciding factor. Others (routing, server load, etc.)
> all contribute. Or, in other words, a DNS server isn't going to be
> faster just because such and such company runs it. The original claim
> was that Symantec's is faster than your own ISP's. That's going to vary
> depending on a person's locale for a number! of reasons. Still tho,
> electricity does play a rather major part in it. Just not the sole
> decider.
>
>

I guess that is about the best apology you can give.

Dustin

unread,
Apr 15, 2012, 11:50:04 AM4/15/12
to
Bear <bearbo...@gmai.com> wrote in
news:XnsA036681AD9498be...@130.225.254.104:

> I'm not assuming. I performed the test. It ranked them fastest to
> slowest and included the speeds...if you or Dustin or Pooh would have
> performed the test as I did, you too would have witnessed it. I don't
> care what it said elsewhere, I care about /that/ test and what it
> said there...as it is the test of contention. Try as you might to
> change it (why? what is your motive?)...the direct quotes from /that/
> test, confirmed by my witness is as stated.

Which test? You admitted you loaded the custom list, but never ran the
benchmark. The screencapture you provided confirms that.

> I see no further reason to argue with you about this. You are not
> being honest.

Well Bear, truth be told, you haven't been honest from the get-go. You've
stomped your feet and insulted those who have been.

Dustin

unread,
Apr 15, 2012, 11:51:35 AM4/15/12
to
Bear <bearbo...@gmai.com> wrote in
news:XnsA036695D6DA8Fbe...@130.225.254.104:
I provided no apology, as I wasn't wrong. I understood how to use the
program. I didn't provide any screencaptures of the list being loaded
but the benchmark not being run. You did.

You apologized for being wrong about it and then tried to pass blame
onto me and Pooh. We weren't wrong tho, you were.

Bear

unread,
Apr 15, 2012, 12:38:35 PM4/15/12
to
Dustin <bughunte...@gmail.com> wrote in
news:XnsA036788BF8B17HHI2948AJD832@no:

> Which test? You admitted you loaded the custom list, but never ran the
> benchmark. The screencapture you provided confirms that.
>
That /is/ the benchmark test. You should read the direct quotes from that
process...or hey a novel idea...perform the test yourself and see for
yourself, though I do not expect an honest review of that.

The only thing I apologized for was not running the benchmark test again
to demonstrate the blue line you kept talking about. I acknowledged that
and apologized if I misled anyone into thinking I had.

It wasn't necessary to run another test, however; if you read the directly
quoted text I posted from the compilation of the custom list, you will see
that the process does in fact perform the benchmark test, rank the top 50
of 5000 fastest first, show the speed of each, and then place the top 50,
ordered fastest first, in an ini file for *future* tests. I witnessed it.
You didn't, by your own admission...end of story.

You and Pooh were wrong Dustin. By not being honest and continuing to
squirm, twist, and crawfish, you are just digging a deeper hole for
yourself Mr. EXPERT. I know a charlatan when I see one...and you most
definitely are one.

Bear

unread,
Apr 15, 2012, 12:47:15 PM4/15/12
to
Dustin <bughunte...@gmail.com> wrote in
news:XnsA03678CE4886HHI2948AJD832@no:

> I provided no apology, as I wasn't wrong. I understood how to use the
> program. I didn't provide any screencaptures of the list being loaded
> but the benchmark not being run. You did.

Nope. You were wrong. You do not understand how to use the program. I
advise you to perform the custom compilation (which you admit you haven't
done) and read the information it contains (which I posted as a direct
quote) and then report back with your apology. :)

>
> You apologized for being wrong about it and then tried to pass blame
> onto myself and Pooh. We weren't wrong tho, you were.

I apologized for the blue line issue in case anyone was misled by my post
in that manner. However, the blue line issue is not required, as the
custom list compilation /does/ include benchmarking, which you were wrong
about and which I proved by posting direct quotes from that process.

After digging into it deeper, I was right all along and the apology wasn't
necessary and is withdrawn.

Dustin

unread,
Apr 15, 2012, 12:51:28 PM4/15/12
to
Bear <bearbo...@gmai.com> wrote in
news:XnsA03676702B9F6be...@130.225.254.104:

> Dustin <bughunte...@gmail.com> wrote in
> news:XnsA036788BF8B17HHI2948AJD832@no:
>
>> Which test? You admitted you loaded the custom list, but never ran
>> the benchmark. The screencapture you provided confirms that.
>>
> That /is/ the benchmark test. You should read the direct quotes from
> that process...or hey a novel idea...perform the test yourself and
> see for yourself, though I do not expect an honest review of that.

Again, you *never* ran the damn test. You loaded a list and posted that.
No test! Mike has recently explained in overly simplistic detail how to
use the app, and you question him too.

I already did the test, Bear. Already posted screenshots of it.

Original benchmark, using default list:
http://bughunter.it-mate.co.uk/dnsbenchmark.jpg

Followups (First one, allowed DNSBenchmark to create the custom list.
Here's what it looked like prior to running benchmark:
http://bughunter.it-mate.co.uk/dnscustom1.png

Here's the list AFTER running benchmark:
http://bughunter.it-mate.co.uk/dnscustom2.png

> The only thing I apologized for was not running the benchmark test
> again to demonstrate the blue line you kept talking about. I
> acknowledged that and apologized if I mislead anyone into thinking I
> had.

It's more than a blue line. See the results of dnscustom2? It's sorted
in order from fastest to slowest, for my location. I said all along
throughout this stupid discussion it would vary. You disagreed.

> It wasn't necessary to run another test however, as if you read the
> direct quoted text I posted from that process compilation of the
> custom list, you will see that process does in fact perform the
> benchmark test, rank the top 50 of 5000 by fastest first, show the
> speed of each, and then place the top 50 ordered fastest first in an
> ini file for *future* tests. I witnessed it. You didn't by self
> admission...end of story.

Which is NOT true. The program collects a large list of DNS servers, but
until/unless YOU RUN THE BENCHMARK using that newly compiled list, it's
NOT sorted from fastest/slowest.

> You and Pooh were wrong Dustin. By not being honest and continuing to
> squirm, twist, and crawfish, you are just digging a deeper hole for
> yourself Mr. EXPERT. I know a charlatan when I see one...and you most
> definitely are one.

Bear, You misunderstood how to use the program and misunderstood the
results. Neither of us did that. We both understand why the benchmark
test was required for custom list and default list. We understand how
and why it ranks dns servers as it does. YOU didn't.

I've not twisted, squirmed, or backed down one little bit. It's obvious
to me you have an inferiority complex and don't like people who know more
than you about this topic correcting you.

I tell you what tho, if I ever need to know how to fly large amounts of
cocaine into the country, I'll be sure to ask you first. You do have
experience in that field.

Dustin

unread,
Apr 15, 2012, 12:57:15 PM4/15/12
to
Bear <bearbo...@gmai.com> wrote in
news:XnsA03677E8D8D3be...@130.225.254.104:

> Nope. You were wrong. You do not understand how to use the program. I
> advise you to perform custom compilation (which you admit you haven't
> done) and read the information contained (which I posted as a direct
> quote) and then report back with your apology. :)

I'm not going to keep doing this. I provided the urls days ago. The pics
haven't changed.

http://bughunter.it-mate.co.uk/dnscustom1.png
This is the custom list loaded, prior to the benchmark test being run.

http://bughunter.it-mate.co.uk/dnscustom2.png
results of the custom list, after running the benchmark.

Original list that ships with the program, after running benchmark test:
http://bughunter.it-mate.co.uk/dnsbenchmark.jpg

> My apology for the blue line issue in case anyone was mislead by my
> post in that manner. However, the blue line issue is not required as
> the custom list compilation /does/ include benchmarking which you
> were wrong about and which I proved from the posting of direct quotes
> from that process.

According to the website of Gibson, the author of the tool we're
discussing, it doesn't. Mike tried (and failed, imo) to enlighten you
about the author's notes regarding using the program too.

> After digging into it deeper, I was right all along and the apology
> wasn't necessary and is withdrawn.

That's alright. I don't follow your advice anyway. I comment for the
benefit of people who might be foolish enough to take your advice.

p-0^0-h the cat

unread,
Apr 15, 2012, 1:06:42 PM4/15/12
to
It's a shame how much energy has been wasted on this fool defending the self-evident, when
I'm sure the time would have been better spent discussing this application and related
matters in more detail, or even Symantec's and similar DNS-blocking thingamajigs.

--
p-0^0-h the cat
Internet Terrorist, Mass sock puppeteer, Agent provocateur, Gutter rat

Mike Easter

unread,
Apr 15, 2012, 1:15:03 PM4/15/12
to
Bear wrote:
> Mike Easter

>> My /assumption would be that the sorting of the top 200 would be by
>> fastest first. Perhaps that list might also have some values included.
>
> I'm not assuming. I performed the test.

Your term 'the test' is ambiguous because you are not properly defining
what you mean by 'the test'. The ambiguity is the 'screening' process vs
the benchmarking process.

The tool's purpose is to *BENCHMARK* DNS servers.

It can benchmark the default servers or it can benchmark a custom list
of servers.

In order to benchmark a custom list of servers, it has to first create
that list of 50 plus your own. During the creation of the screening list,
this http://www.grc.com/dns/buildlist.png is the type of graphic you see;
this one, from Gibson's site http://www.grc.com/dns/custom-list.htm,
shows the conclusion of the screening test.

> It ranked them fastest to slowest and included the speeds...

'Speeds' is also ambiguous in this context. During the screening process,
which I have compared to pinging, the server rankings are based on an
extremely fast, near-instantaneous pass of over 2 servers per second, or
about 131 per minute (4,849 servers in 37 minutes).

That is not a 'speed' test; that is a screening process to arrange the
servers in an order based on something they can do in a fraction of a
second.

> if you or Dustin or Pooh would have performed the
> test as I did, you too would have witnessed it.

I have performed the test. And I'm performing it again.

What you posted - the graphic link - as 'evidence' of what you did was
not evidence of such a rating as you are describing, so your statement
in that regard has to be mistaken or otherwise untrue.

My reason for performing the test again is to see if there is an
intermediate stage in the process at which one could gain access to that
(trivial) screening information, which is *NOT* a benchmark worth talking
about at such length.

You are debating a foolish and trivial intermediate 'event' in the usage
of the tool which is of almost zero importance except for the benefits
of selecting 50 servers in preparation to /actually/ benchmark as well
as selecting 200 servers to refer back to Gibson's site.

> ..the direct quotes from /that/ test, confirmed by my witness is
> as stated.

Confirmed by your ?witness? -- where your witness is the graphic? which
does not confirm what you are saying.

> I see no further reason to argue with you about this. You are not being
> honest.

We are using the illustrations provided by your own graphic and Gibson's
site to demonstrate the incorrectness of what you are saying.

I'm trying to help determine a question you raise, for which you have
provided no evidence whatsoever.



--
Mike Easter

Bear

unread,
Apr 15, 2012, 1:15:29 PM4/15/12
to
Dustin <bughunte...@gmail.com> wrote in
news:XnsA03682F567A70HHI2948AJD832@no:

> Again, you *never* ran the damn test. You loaded a list and posted that.
> No test! Mike has recently explained in overly simplistic detail how to
> use the app, and you question him too.

You have not used the custom list creation process, by your own admission,
and do not understand it. I have, and I have posted the entire direct
quote about that process...which /includes/ a benchmark test of 5000
nameservers, chooses the fastest 50 and lists the speed of each, orders
that list fastest first, and places those nameservers in an ini file for
FUTURE tests.

You and Pooh were wrong about electricity and closest, and you are wrong
about this. Continue on with making more of a fool of yourself.

Regardless of what areas of expertise you have, you have subverted any
confidence one may have in your comments about anything by going outside
the bounds of your expertise, being wrong and failing to admit it, and on
top of that - talking down to everyone and proclaiming you ARE the expert
and they are stupid. No true professional demonstrates such bad behavior
and you undermine yourself.

Bear

unread,
Apr 15, 2012, 2:15:33 PM4/15/12
to
Dustin <bughunte...@gmail.com> wrote in
news:XnsA03682F567A70HHI2948AJD832@no:
LOL...translation...I don't agree with you. You keep avoiding the direct
points.

Just to clarify my understanding of the use of dnsbench, which is at some
variance from what I understand you to be saying. You were wrong. You do not
understand how to use the program.

Dustin and Pooh said the closest server is the fastest...electricity. I said
no...other factors are at play.

Bear

unread,
Apr 15, 2012, 1:18:05 PM4/15/12
to
Dustin <bughunte...@gmail.com> wrote in
news:XnsA03683F063D7DHHI2948AJD832@no:


>
> That's alright. I don't follow your advice anyway. I comment for the
> benefit of people who might be foolish enough to take your advice.
>
>
"This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
determine whether they are accessible and responsive from your present
location. If so, the Benchmark measures the resolver's minimum response
time, as well as whether it appears to be operating reliably and
correctly.

While this is underway, every qualifying resolver is dynamically
"ranked" from fastest to slowest. The top line of the display above
shows the minimum response time of the fastest resolver found so far, as
well as the minimum response time of the "50th fastest."

Once the ranking scan is completed, the IP addresses of those 50 fastest
qualifying resolvers will be loaded into the Benchmark so that you can
immediately perform a comprehensive analysis of their performance and
the IPs will also be written to a file named "DNSBENCH.INI", located in
the same directory as the Benchmark. This "DNSBENCH.INI" file will then
automatically be loaded whenever the Benchmark is run in the future.

All results obtained during this global resolver ranking process, and
while benchmarking, will be far more accurate if all other network usage
is minimized while the Benchmark is working."

LOL. This makes you the fool, Dustin.
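
Setting the shouting aside, the 'minimum response time' measurement the
quoted text describes is simple to sketch. Here is a minimal,
self-contained Python probe that hand-builds one DNS A-record query, sends
it over UDP, and times the reply. This illustrates the general technique
only; it is not Gibson's code, and the function name, query ID, and choice
of example.com are our own.

    import socket
    import struct
    import time

    def dns_query_ms(server_ip, hostname="example.com", timeout=2.0):
        """Send one minimal DNS A-record query over UDP and return
        the round-trip time in milliseconds, or None on timeout."""
        # 12-byte header: ID, flags 0x0100 (recursion desired),
        # QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=0
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        # Question: length-prefixed labels, then QTYPE=A(1), QCLASS=IN(1)
        qname = b"".join(bytes([len(p)]) + p.encode("ascii")
                         for p in hostname.split("."))
        question = qname + b"\x00" + struct.pack(">HH", 1, 1)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            start = time.perf_counter()
            sock.sendto(header + question, (server_ip, 53))
            sock.recv(512)  # any reply at all is enough for a timing probe
            return (time.perf_counter() - start) * 1000.0
        except OSError:     # timeout or network error
            return None
        finally:
            sock.close()

A scan like the one quoted would, per the text, take the minimum over
several such probes rather than trusting any single measurement.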

Bear

unread,
Apr 15, 2012, 1:20:15 PM4/15/12
to
Mike Easter <Mi...@ster.invalid> wrote in news:9v0e0mF2j1U1
@mid.individual.net:

> What you posted - the graphic link - as 'evidence' of what you did was
> not evidence of such a rating as you are describing, so your statement
> in that regard has to be mistaken or otherwise untrue.

"This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
determine whether they are accessible and responsive from your present
location. If so, the Benchmark measures the resolver's minimum response
time, as well as whether it appears to be operating reliably and
correctly.

While this is underway, every qualifying resolver is dynamically
"ranked" from fastest to slowest. The top line of the display above
shows the minimum response time of the fastest resolver found so far, as
well as the minimum response time of the "50th fastest."

Once the ranking scan is completed, the IP addresses of those 50 fastest
qualifying resolvers will be loaded into the Benchmark so that you can
immediately perform a comprehensive analysis of their performance and
the IPs will also be written to a file named "DNSBENCH.INI", located in
the same directory as the Benchmark. This "DNSBENCH.INI" file will then
automatically be loaded whenever the Benchmark is run in the future.

All results obtained during this global resolver ranking process, and
while benchmarking, will be far more accurate if all other network usage
is minimized while the Benchmark is working."

Game/Set/Match

p-0^0-h the cat

unread,
Apr 15, 2012, 1:24:41 PM4/15/12
to
On 15 Apr 2012 17:15:29 GMT, Bear <bearbo...@gmai.com> wrote:

>You have not used the custom list creation process by self admission and do
>not understand it. I have and I have posted the entire direct quote about
>that process...which /includes/ a benchmark test of 5000 nameservers,

Wrong. It doesn't "benchmark" the 5000 nameservers.

Dustin

unread,
Apr 15, 2012, 1:30:21 PM4/15/12
to
p-0^0-h the cat <super...@justpurrfect.invalid> wrote in
news:kivlo7pgthu72kca7...@4ax.com:

> It's a shame how much energy has been wasted on this fool defending
> the self evident, when I'm sure the time would have been better spent
> discussing this application and related matters in more detail, or
> even Symantec's, and similar DNS blocking thingamajigs.


I'm surprised I spent this much time on it. Or ran either test or posted
the results.. All for nothing. I already knew the answer.. Doh.

Thousands of technicians, network engineers, etc all do.. Bear's got me on
aircraft.. but, alas, that takes money and I just wasn't born with a
golden spoon in my mouth. :)

p-0^0-h the cat

unread,
Apr 15, 2012, 1:38:08 PM4/15/12
to
On 15 Apr 2012 17:20:15 GMT, Bear <bearbo...@gmai.com> wrote:

>Game/Set/Match

You've been fooling yourself all your life. I guess you're in so deep now that self
realisation is unlikely. You only get one chance. You've wasted yours.

Dustin

unread,
Apr 15, 2012, 1:38:21 PM4/15/12
to
Bear <bearbo...@gmai.com> wrote in
news:XnsA0367CB1CA196be...@130.225.254.104:

> You have not used the custom list creation process by self admission
> and do not understand it. I have and I have posted the entire direct
> quote about that process...which /includes/ a benchmark test of 5000
> nameservers, chooses the fastest 50 and lists the speed of each,
> orders that list fastest first, and places those nameservers in an
> ini file for FUTURE tests.

Imagine a large font.. "HUH?" I did use the custom list! I don't know
what you mean by this self admission nonsense. You asked me to run the
custom list, remember? It took 37 minutes and I'd guess an additional..
10 minutes to run the benchmark. I posted the results, after getting the
custom list and then after running the benchmark. WTF are you smoking
dude?

> You and Pooh were wrong about electricity and closest, and you are
> wrong about this. Continue on with making more of a fool of yourself.

I mentioned electricity first, as another poster pointed out, in a
context where it would have been a better example of propagation delay.
Pooh had nothing to do with that, so if you're going to accuse, stick
with me. Ok?

Regardless of the context in which I said it, it's still very much true.
These characters don't arrive on your screen by magic.

> Regardless of what areas of expertise you have, you have subverted
> any confidence one may have in your comments about anything by going
> outside the bounds of your expertise, being wrong and failing to
> admit it, and on top of that - talking down to everyone and

I'm waiting to go outside of my area of expertise. I can keep up just
fine on the technical or electrical aspect. I'm not Air Force certified,
but.. I suspect I'm just as good an electrician (without an actual
license) as you likely are; I already told you my "certification" level.
You won't even tell us if you made it to journeyman's rank.

> proclaiming you ARE the expert and they are stupid. No true
> professional demonstrates such bad behavior and you undermine
> yourself.

Ever seen the TV show House? Some experts just don't have tact; that
doesn't make them less of an expert. It just means they aren't likely to
do well in politics. I know House isn't a real doctor, but.. I've met
many who actually are with his attitude. :)

Dustin

unread,
Apr 15, 2012, 1:42:24 PM4/15/12
to
Bear <bearbo...@gmai.com> wrote in
news:XnsA0367D228D37Ebe...@130.225.254.104:

> "This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
> determine whether they are accessible and responsive from your
> present location. If so, the Benchmark measures the resolver's
> minimum response time, as well as whether it appears to be operating
> reliably and correctly.

> Once the ranking scan is completed, the IP addresses of those 50
> fastest qualifying resolvers will be loaded into the Benchmark so
> that you can immediately perform a comprehensive analysis of their
> performance and the IPs will also be written to a file named
> "DNSBENCH.INI", located in the same directory as the Benchmark. This
> "DNSBENCH.INI" file will then automatically be loaded whenever the
> Benchmark is run in the future.

See that? Ranking scan (means nothing); the 50 fastest from the ranking
scan (totally dependent on your physical location) are kept for the
benchmark test. You do realize more than one test is happening?


> LOL. This makes you the fool Dustin.

Not exactly. It clearly demonstrates one of us can't process technical
information very well. I'll give you a simple clue. The one who can't seem
to grasp his flawed testing methodology was a pilot. [g]

Dustin

unread,
Apr 15, 2012, 1:45:26 PM4/15/12
to
Bear <bearbo...@gmai.com> wrote in
news:XnsA0367D228D37Ebe...@130.225.254.104:

> "This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
> determine whether they are accessible and responsive from your
> present location. If so, the Benchmark measures the resolver's
> minimum response time, as well as whether it appears to be operating
> reliably and correctly.

> Once the ranking scan is completed, the IP addresses of those 50
> fastest qualifying resolvers will be loaded into the Benchmark so
> that you can immediately perform a comprehensive analysis of their
> performance and the IPs will also be written to a file named
> "DNSBENCH.INI", located in the same directory as the Benchmark. This
> "DNSBENCH.INI" file will then automatically be loaded whenever the
> Benchmark is run in the future.

See that? Ranking scan (means nothing); the 50 fastest from the ranking
scan (totally dependent on your physical location) are kept for the
benchmark test. You do realize more than one test is happening?

I did. I'd be willing to bet Pooh did too. Mike is likely confirming it.

> LOL. This makes you the fool Dustin.

Not exactly. It clearly demonstrates one of us can't process technical
information very well. I.e., the ranking scan isn't the benchmark scan.
The ranking scan sets up the top 50 for the benchmark scan to be done
later. It's not really sorted fastest to slowest until the benchmark (not
ranking) test is done.

Mike Easter

unread,
Apr 15, 2012, 1:47:52 PM4/15/12
to
Bear wrote:
> Dustin

>> Which test? You admitted you loaded the custom list, but never ran the
>> benchmark. The screencapture you provided confirms that.

That is correct.

> That /is/ the benchmark test.

That is not the benchmark test.

> You should read the direct quotes from that
> process...

Here^1 are the direct quotes from that process:

> or hey a novel idea...perform the test yourself and see for
> yourself, though I do not expect an honest review of that.

I have performed the preliminary screening test again. You are never
provided a speed ranking in milliseconds of the servers which responded.
In my most recent test, the range from fastest to slowest was a few
milliseconds, between 14 and 17.

> The only thing I apologized for was not running the benchmark test again

You have never provided any evidence that you performed the benchmark
test at all. Your graphic is consistent with the view after the
screening test and before any benchmark is run.

You apparently mistakenly believe that the 37-minute selection/screening
process of the 50 is a 'benchmark' -- which is why you are so far off base.

> It wasn't necessary to run another test however, as if you read the direct
> quoted text I posted from that process compilation of the custom list, you
> will see that process does in fact perform the benchmark test, rank the top
> 50 of 5000 by fastest first, show the speed of each, and then place the top
> 50 ordered fastest first in an ini file for *future* tests. I witnessed it.
> You didn't by self admission...end of story.

This paragraph above is not true. It is wrong. You are mistaken. 'End of
story', as if that meant anything.

The tool, when used with the custom list, creates a list of 50 servers
based on each one's screening speed compared to the others, but that list
is never shown in order of screening speed. Even though the tool uses
each server's screening speed for its own internal processes, you never
see that ranking.

At the completion of the screening, you can see a list of the servers
chosen and that list goes into the .ini, but you never see the screening
speed results or the screening speed order, because it is immaterial.
The purpose of the screening is to select 50 to benchmark, not to do
anything with the rank of the screening, which is trivial.

Then if you run the benchmark you get all kinds of true benchmarking
results which allow you to evaluate the servers and act on that evaluation.


^1 <SG>
Now ranking the performance of 4,849
resolvers for the creation of your
"Top 50 Resolvers" custom list

You must do this just once to obtain the full benefits from this Benchmark.

This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
determine whether they are accessible and responsive from your present
location. If so, the Benchmark measures the resolver's minimum response
time, as well as whether it appears to be operating reliably and correctly.

While this is underway, every qualifying resolver is dynamically
"ranked" from fastest to slowest. The top line of the display above
shows the minimum response time of the fastest resolver found so far, as
well as the minimum response time of the "50th fastest."

Once the ranking scan is completed, the IP addresses of those 50 fastest
qualifying resolvers will be loaded into the Benchmark so that you can
immediately perform a comprehensive analysis of their performance and
the IPs will also be written to a file named "DNSBENCH.INI", located in
the same directory as the Benchmark. This "DNSBENCH.INI" file will then
automatically be loaded whenever the Benchmark is run in the future.

Four important notes:

All results obtained during this global resolver ranking process, and
while benchmarking, will be far more accurate if all other network usage
is minimized while the Benchmark is working.

Unlike the Benchmark's built-in default list of safe and well-known
commercial resolvers, this global list contains all of those resolvers,
plus others of unknown origin, ownership, disposition, reliability,
availability and access. We are expressly not vouching for, or
recommending, that you use any of these. So you must use your own
judgement based upon the designated ownership, the resolver's network
name, and anything else that might give you a clue about whether such a
resolver might be wise to use.

In an effort to keep the global resolver ranking speed as high as
possible, only the resolvers' cached lookup performance is being tested.
This means that some manual pruning of this list will likely be
required. After benchmarking the resolvers thoroughly, simply
right-click on any unacceptable resolvers and select "Remove this
nameserver" to remove any that are unacceptable. Remember to save the
updated (pruned) .INI file using either the Add/Remove dialog or the
application's system menu (Alt-Spacebar).

You may repeat this list-building anytime you wish to recreate your
personalized custom resolver IP list. This should be done when this
machine is moved to another location or to another ISP.

- Steve Gibson

Please Note: This program is Copyright (c) 2010 by Gibson Research
Corporation -- ALL RIGHTS RESERVED. This program is FREEWARE. Although
it may not be altered in any way, it MAY BE FREELY COPIED AND
DISTRIBUTED onto and through any and all computer media in ANY form or
fashion. You are hereby granted the right to do so.
• • •


--
Mike Easter

Mike Easter

unread,
Apr 15, 2012, 1:52:33 PM4/15/12
to
Bear wrote:

> You have not used the custom list creation process by self admission and do
> not understand it. I have and I have posted the entire direct quote about
> that process...which /includes/ a benchmark test of 5000 nameservers,

That is not a 'benchmark test'. That is a screening test.

> chooses the fastest 50 and lists the speed of each,

That is untrue. No such list of the screening result is provided.

> orders that list fastest first, and places those nameservers in an
> ini file for FUTURE tests.

That is untrue. The list is ordered with your servers first, then the
others in IP-number order.

Somehow you never got around to benchmarking your custom list and you
also never got around to evaluating the tabular data or the conclusions.



--
Mike Easter

Bear

unread,
Apr 15, 2012, 1:55:49 PM4/15/12
to
Mike Easter <Mi...@ster.invalid> wrote in
news:9v0fu7...@mid.individual.net:

>>> Which test? You admitted you loaded the custom list, but never ran
>>> the benchmark. The screencapture you provided confirms that.
>
> That is correct.
>
>> That /is/ the benchmark test.
>
> That is not the benchmark test.

"This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
determine whether they are accessible and responsive from your present
location. If so, the Benchmark measures the resolver's minimum response
time, as well as whether it appears to be operating reliably and
correctly.

While this is underway, every qualifying resolver is dynamically
"ranked" from fastest to slowest. The top line of the display above
shows the minimum response time of the fastest resolver found so far, as
well as the minimum response time of the "50th fastest."

Once the ranking scan is completed, the IP addresses of those 50 fastest
qualifying resolvers will be loaded into the Benchmark so that you can
immediately perform a comprehensive analysis of their performance and
the IPs will also be written to a file named "DNSBENCH.INI", located in
the same directory as the Benchmark. This "DNSBENCH.INI" file will then
automatically be loaded whenever the Benchmark is run in the future.

All results obtained during this global resolver ranking process, and
while benchmarking, will be far more accurate if all other network usage
is minimized while the Benchmark is working."

p-0^0-h the cat

unread,
Apr 15, 2012, 2:00:34 PM4/15/12
to
On Sun, 15 Apr 2012 17:45:26 GMT, Dustin <bughunte...@gmail.com> wrote:

>Not exactly. It clearly demonstrates one of us can't process technical
>information very well. IE: the ranking scan isn't the benchmark scan.
>The ranking scan sets up the top 50 for the benchmark scan to be done
>later. It's not really sorting fastest/slowest until the benchmark (not
>ranking) test is done.

Exactly. The ranking scan is just making a preliminary selection before the more thorough
'benchmark' is performed. A bit like selecting, from 5000 job applications, the 50 you wish
to interview. How difficult is this?
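
The job-applications analogy can be made concrete with a few lines of
Python. This shortlisting sketch reuses the dns_query_ms probe sketched
earlier in the thread (assumed in scope); the cut-off of 50 mirrors the
tool's, while the probe count and function name are invented for
illustration.

    def screen_resolvers(candidates, keep=50, probes=3):
        """Shortlist the `keep` fastest responders from a large pool,
        ranked by minimum observed response time -- an analogue of a
        preliminary ranking scan, not of a full benchmark."""
        ranked = []
        for ip in candidates:
            # uses dns_query_ms() from the earlier sketch
            times = [t for t in (dns_query_ms(ip) for _ in range(probes))
                     if t is not None]
            if times:                     # unreachable servers drop out
                ranked.append((min(times), ip))
        ranked.sort()                     # fastest minimum first
        return [ip for _, ip in ranked[:keep]]

Note the design point at issue in this thread: a sort order exists inside
the function, but the caller receives addresses only; the speeds used for
ranking are thrown away.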

Mike Easter

unread,
Apr 15, 2012, 2:05:02 PM4/15/12
to
Bear wrote:
> Mike Easter

>> That is not the benchmark test.
>
> "This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
> determine whether they are accessible and responsive from your present
> location.

You are posting something over and over again which you fail to
comprehend correctly.

What you are posting is the tool's announcement of a process it
performs, and that process is a screening process, not a/its
benchmarking process.

You should run the .exe again and if the/your dnsbench.ini is present in
its folder/directory the tool will load the 50 plus your own.

Then you should click Run Benchmark and wait for the results. Then you
should look at the Tabular Data and the Conclusions. Then you will
finally know what the DNS Benchmark actually does, instead of simply
screening some servers for the milliseconds they take to reply to the
tool.


--
Mike Easter

Dustin

unread,
Apr 15, 2012, 2:45:13 PM4/15/12
to
p-0^0-h the cat <super...@justpurrfect.invalid> wrote in
news:iv2mo79dm2vv1q02r...@4ax.com:

> On Sun, 15 Apr 2012 17:45:26 GMT, Dustin <bughunte...@gmail.com>
> wrote:
>
>>Not exactly. It clearly demonstrates one of us can't process
>>technical information very well. IE: the ranking scan isn't the
>>benchmark scan. The ranking scan sets up the top 50 for the benchmark
>>scan to be done later. It's not really sorting fastest/slowest until
>>the benchmark (not ranking) test is done.
>
> Exactly. The ranking scan is just making a preliminary selection
> before the more thorough 'benchmark' is performed. A bit like
> selecting from 5000 job applications the 50 you wish to interview.
> How difficult is this.
>

How many posts are in this thread now? [g]

Bear

unread,
Apr 15, 2012, 2:49:19 PM4/15/12
to
Mike Easter <Mi...@ster.invalid> wrote in news:9v0g6vFhkkU2
@mid.individual.net:

> Somehow you never got around to benchmarking your custom list and you
> also never got around to evaluating the tabular data or the conclusions.


Wrong wrong wrong. I've done everything that program is capable of.

Bear

unread,
Apr 15, 2012, 2:51:15 PM4/15/12
to
Mike Easter <Mi...@ster.invalid> wrote in news:9v0gudFp9eU1
@mid.individual.net:

> You are posting something over and over again which you fail to
> comprehend correctly.

It's not open to interpretation. It is the author's description of the
custom list process. It does include the benchmark test. It does
everything described in that description. Are you saying the description
is wrong?

Bear

unread,
Apr 15, 2012, 2:52:14 PM4/15/12
to
Dustin <bughunte...@gmail.com> wrote in
news:XnsA036963E670B7HHI2948AJD832@no:

> How many posts are in this thread now? [g]

Good point. This is called a troll tell. It also proves to me you know you
are wrong, but are just too much of a coward to admit it.

p-0^0-h the cat

unread,
Apr 15, 2012, 3:18:12 PM4/15/12
to
On Sun, 15 Apr 2012 18:45:13 GMT, Dustin <bughunte...@gmail.com> wrote:

>p-0^0-h the cat <super...@justpurrfect.invalid> wrote in
>news:iv2mo79dm2vv1q02r...@4ax.com:
>
>> On Sun, 15 Apr 2012 17:45:26 GMT, Dustin <bughunte...@gmail.com>
>> wrote:
>>
>>>Not exactly. It clearly demonstrates one of us can't process
>>>technical information very well. IE: the ranking scan isn't the
>>>benchmark scan. The ranking scan sets up the top 50 for the benchmark
>>>scan to be done later. It's not really sorting fastest/slowest until
>>>the benchmark (not ranking) test is done.
>>
>> Exactly. The ranking scan is just making a preliminary selection
>> before the more thorough 'benchmark' is performed. A bit like
>> selecting from 5000 job applications the 50 you wish to interview.
>> How difficult is this.
>>
>
>How many posts are in this thread now? [g]

We need a show of hands, and move on. Billy Bear Bottoms has been waging a war of
attrition against anyone with any serious IT experience since he came here. I guess he's
trying to lower the aggregate IQ to somewhere near his level.

p-0^0-h the cat

unread,
Apr 15, 2012, 3:40:13 PM4/15/12
to
On 15 Apr 2012 18:52:14 GMT, Bear <bearbo...@gmai.com> wrote:

>This is called a troll tell. It also proves to me you know you
>are wrong, just too much of a coward to admit it.

You still can't grasp the difference between belief, self-delusion, and proof.

The modern world wasn't built by idiots, but by men and women who built up an
understanding of the way things worked by applying rigor to their thought processes,
providing proof to others of their findings by publishing repeatable experimental results,
and subjecting themselves to peer review.

You are so sad, slopping around in the primeval swamp, poncing off the work of others, and
trying to pass yourself off as something you are not. Do you seriously think that the
three of us are wrong, and you are right?

Dustin

unread,
Apr 15, 2012, 5:13:14 PM4/15/12
to
G. Morgan <seal...@osama-is-dead.net> wrote in
news:rq8mo7tbteg5i60ga...@Osama-is-dead.net:

> p-0^0-h the cat wrote:
>
>>On Sun, 15 Apr 2012 17:45:26 GMT, Dustin <bughunte...@gmail.com>
>>wrote:
>>
>>>Not exactly. It clearly demonstrates one of us can't process
>>>technical information very well. IE: the ranking scan isn't the
>>>benchmark scan. The ranking scan sets up the top 50 for the
>>>benchmark scan to be done later. It's not really sorting
>>>fastest/slowest until the benchmark (not ranking) test is done.
>>
>>Exactly. The ranking scan is just making a preliminary selection
>>before the more thorough 'benchmark' is performed. A bit like
>>selecting from 5000 job applications the 50 you wish to interview.
>>How difficult is this.
>
> I know, how many fucking times does he need to be told?

I don't think it really matters. Bear is pissed off. If it wasn't some
silly DNS benchmark, it would be something else.

Mark Warner

unread,
Apr 15, 2012, 5:22:23 PM4/15/12
to
p-0^0-h the cat wrote:
> Do you seriously think that the
> three of us are wrong, and you are right?

Sure he does. That's his MO. And a classic sign of mental illness. *He*
is the only one that knows the *Truth*. Everyone else is conspiring
against him. Textbook.

--
Mark Warner
MEPIS Linux
Registered Linux User #415318
...lose .inhibitions when replying

Scaly Ron

unread,
Apr 15, 2012, 5:35:20 PM4/15/12
to
On Sun, 15 Apr 2012 21:13:14 GMT, Dustin <bughunte...@gmail.com> wrote:

>G. Morgan <seal...@osama-is-dead.net> wrote in
>news:rq8mo7tbteg5i60ga...@Osama-is-dead.net:
>
>> p-0^0-h the cat wrote:
>>
>>>On Sun, 15 Apr 2012 17:45:26 GMT, Dustin <bughunte...@gmail.com>
>>>wrote:
>>>
>>>>Not exactly. It clearly demonstrates one of us can't process
>>>>technical information very well. IE: the ranking scan isn't the
>>>>benchmark scan. The ranking scan sets up the top 50 for the
>>>>benchmark scan to be done later. It's not really sorting
>>>>fastest/slowest until the benchmark (not ranking) test is done.
>>>
>>>Exactly. The ranking scan is just making a preliminary selection
>>>before the more thorough 'benchmark' is performed. A bit like
>>>selecting from 5000 job applications the 50 you wish to interview.
>>>How difficult is this.
>>
>> I know, how many fucking times does he need to be told?
>
>I don't think it really matters. Bear is pissed off. If it wasn't some
>silly DNS benchmark, it would be something else.

If I'm replying to the *REAL* Dustin (quick glance at header looks
good, but the defeatist tone seems out of character) I disagree.

ACF is still the best freeware resource available. It's where people come
to get answers to their "how to" freeware questions, because despite the
"loser infestation" of Bear and company, a very knowledgeable and diverse
group of people still continue to follow this NG from behind a set of
filters, and are willing to contribute when help is needed.

Consider this: If ACF is a "wasteland", why is Bear Bottoms still hanging
around here instead of moving on to yacf or some other venue? He obviously
sees some benefit to reading and posting to ACF.

To the newcomer or casual user, there's a lot of toxic waste dumped here by
Bear and the various trolls, sock puppets and forgers. My sense is that
most are alter egos of him, and perhaps of one other loser, John Corliss.
You give them way too much credit in terms of influence.

From Agent (native in Vista or under wine in Ubuntu) I see almost none of
the garbage and only an occasional reply. Those that do reply are quoting
less, so the overall result is that their efforts are largely a waste of
time and are being ignored. I check this link once or twice a week to see
what a new or casual user sees:

http://groups.google.com/group/alt.comp.freeware/topics?lnk=srg&hl=en

Two things are worth noting when you view it:

(1) It's ridiculously easy to spot Bear's garbage threads.

(2) The threads draw hundreds of replies including those from Bear's socks,
forgers and trolls talking to/among themselves.

Bear and company are losers in life and they know it. Their lives suck
compared to most of us. The amount of time and effort they expend trying
to spoil what real contributors have put forth for the benefit of others is
typical of those who have failed in life, economically, socially,
emotionally and personally, especially those who've experienced repeated
rejections when attempting to interact with others IRL. They can't exhibit
similar behaviors face to face, but the relative distance and anonymity of
Usenet grants them license (in their own minds) to avoid responsibility for
what they do. When something happens, like losing an argument about DNS, they
blame others, failing to realize their own actions were the cause. They
exhibit similar reactions in life, blaming others for their failures rather
than correcting their own actions, then wondering why they continue to
fail.

Think about it. Look at those who surround you in your own life. Among
those who are successful, do you know any who behave like Bear? Now look
at those who are complete failures. See any similarities to me? 'Nuff said.

ACF has been around for a very long time and will continue to be around for
years to come despite what a bunch of "Goonies" try to throw at it.

--
Scaly Ron - mo...@hotmail.com
Ubuntu/Vista Dual Boot
Registered Linux User #666
Ubuntu User #666(11.10)

p-0^0-h the cat

unread,
Apr 15, 2012, 5:39:48 PM4/15/12
to
On Sun, 15 Apr 2012 22:35:20 +0100, Scaly Ron <mo...@hotmail.com> wrote:

>Bear and company are losers in life and they know it. Their lives suck
>compared to most of us. The amount of time and effort they expend trying
>to spoil what real contributors have put forth for the benefit of others is
>typical of those who have failed in life, economically, socially,
>emotionally and personally, especially those who've experienced repeated
>rejections when attempting to interact with others IRL. They can't exhibit
>similar behaviors face to face, but the relative distance and anonymity of
>Usenet grants them license (in their own minds) to avoid responsibility for
>what they do. When something happens, like losing an argument about DNS, they
>blame others, failing to realize their own actions were the cause. They
>exhibit similar reactions in life, blaming others for their failures rather
>than correcting their own actions, then wondering why they continue to
>fail.

Bit like you then, Ron?

p-0^0-h the cat

unread,
Apr 15, 2012, 5:46:30 PM4/15/12
to
On Sun, 15 Apr 2012 17:22:23 -0400, Mark Warner <mhwarner.i...@gmail.com> wrote:

>p-0^0-h the cat wrote:
>> Do you seriously think that the
>> three of us are wrong, and you are right?
>
>Sure he does. That's his MO. And a classic sign of mental illness. *He*
>is the only one that knows the *Truth*. Everyone else is conspiring
>against him. Textbook.

You're probably right, but Jenny Agutter has just gone walkabout, so er ... my
concentration is straying...

Mike Easter

unread,
Apr 15, 2012, 6:05:50 PM4/15/12
to
Bear wrote:
> Mike Easter

>> Somehow you never got around to benchmarking your custom list and you
>> also never got around to evaluating the tabular data or the conclusions.
>
>
> Wrong wrong wrong. I've done everything that program is capable of.

The graphic you posted from the 'response time' tab does not have any of
the cached, uncached, or dotcom color bars (red, green, and blue) that
represent the results of the benchmarking itself.

Until that benchmarking process is done, there is no tabular data and no
conclusions. Your graphic 'proves'/illustrates that you hadn't done the
benchmarking when the screenshot was made.

If you want to say that you have proceeded further than the graphic you
presented for your 'evidence' then post a graphic which represents the
actual benchmarking results rather than a display of the servers without
response time bars.

Your evidence:
http://bearware.info/screenshots/DNSBenchmark000.png

--
Mike Easter

Bear

unread,
Apr 15, 2012, 6:37:52 PM4/15/12
to
Mike Easter <Mi...@ster.invalid> wrote in news:9v0v1rF1l1U1
@mid.individual.net:

> The graphic you posted from the 'response time' tab does not have any of
> the cached, uncached, or dotcom color bars red, green, and blue that
> represent the results of the benchmarking itself.

Are you daft? Did you read the quoted description for creating the custom
list?

Bear

unread,
Apr 15, 2012, 6:41:44 PM4/15/12
to
Mike Easter <Mi...@ster.invalid> wrote in news:9v0v1rF1l1U1
@mid.individual.net:

> If you want to say that you have proceeded further than the graphic you
> presented for your 'evidence' then post a graphic which represents the
> actual benchmarking results rather than a display of the servers without
> response time bars.
>
> Your evidence:
> http://bearware.info/screenshots/DNSBenchmark000.png

All anyone has to do is run the custom list process to see for themselves.

I've also posted that I've run the benchmark several times with that
custom list at different times of the day...off-peak hours Cox is
fastest, peak hours Symantec is fastest. How much faster either way?
Less than 1/4 second. LOL.

Mike Easter

unread,
Apr 15, 2012, 7:31:26 PM4/15/12
to
Bear wrote:
> Mike Easter

>> The graphic you posted from the 'response time' tab does not have any of
>> the cached, uncached, or dotcom color bars red, green, and blue that
>> represent the results of the benchmarking itself.
>
> Are you daft? Did you read the quoted description for creating the custom
> list?

The creation of the custom list is not a benchmark but a screen. The
result of that screen does not provide any speed 'ranking' for the user's
observation beyond the selection of the top 50 from the 5000, and those
50 are displayed unranked in terms of speed or 'closeness'.



--
Mike Easter

Mark Warner

unread,
Apr 15, 2012, 7:38:24 PM4/15/12
to
Don't confuse him with the facts.

Mike Easter

unread,
Apr 15, 2012, 7:49:14 PM4/15/12
to
Bear wrote:

> I've also posted that I've run the benchmark several times with that custom
> list at different times of the day...off-peak hours Cox is fastest, peak
> hours Symantec is fastest. How much faster either way, less than 1/4
> second. LOL.

It is funny how readily and foolishly some people laugh out loud as if
to be derisive.

1/4 second would be 250 milliseconds. The typical response time for
cached information is on the order of 14 msec for the quickest DNS, and
the difference between the top two is probably not even one msec, much
less 250.

Similarly the fastest uncached data for the first two are typically
about 50-55 msec and the difference between the top two is probably only
a few msec or none.

There is much greater likelihood of there being significant differences
between the dotcom results. Mine shows 32 msec for the fastest and 411
msec for the slowest for that parameter.

There is a lot more to evaluating the character of the DNS servers than
just the fastest cached-data time, which is going to be identical or
within a few msec from server to server, nowhere near 250 msec -- as you
laugh to yourself out loud and inappropriately, like a madman.



--
Mike Easter
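
The cached/uncached distinction drawn above can also be sketched. A cached
lookup repeats a popular name the resolver almost certainly holds; an
uncached lookup asks for a random, never-before-seen name, forcing a full
recursive walk. This toy analogue reuses the dns_query_ms probe from
earlier (assumed in scope); the sample count and result keys are invented
for illustration and do not reproduce the tool's methodology.

    import statistics
    import uuid

    def benchmark_resolver(ip, samples=10):
        """Toy benchmark pass: repeated cached lookups of one popular
        name versus uncached lookups of unique random names."""
        cached = [t for t in (dns_query_ms(ip, "example.com")
                              for _ in range(samples)) if t is not None]
        uncached = [t for t in (dns_query_ms(ip, uuid.uuid4().hex + ".example.com")
                                for _ in range(samples)) if t is not None]
        return {
            "cached_min_ms": min(cached) if cached else None,
            "uncached_min_ms": min(uncached) if uncached else None,
            "cached_stdev_ms": statistics.stdev(cached) if len(cached) > 1 else 0.0,
        }

On figures like those above (cached around 14 msec, uncached around 50-55
msec), the gap between two competent resolvers is a few milliseconds,
nowhere near a quarter of a second.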

Mark Warner

unread,
Apr 15, 2012, 7:52:02 PM4/15/12
to
Mike Easter wrote:
> Bear wrote:
>>
>> I've also posted that I've run the benchmark several times with that
>> custom
>> list at different times of the day...off-peak hours Cox is fastest, peak
>> hours Symantec is fastest. How much faster either way, less than 1/4
>> second. LOL.
>
> It is funny how readily and foolishly some people laugh out loud as if
> to be derisive.
>
> 1/4 second would be 250 milliseconds. The typical response times for
> cached information is in the order of 14 msec for the quickest DNS, and
> the difference between the top two is probably not even one msec, much
> less 250.

Real numbers don't matter when you're just makin' shit up.

Bear

unread,
Apr 15, 2012, 7:58:19 PM4/15/12
to
Mike Easter <Mi...@ster.invalid> wrote in
news:9v142a...@mid.individual.net:
How do you then account for this?

"This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
determine whether they are accessible and responsive from your present
location. If so, the Benchmark measures the resolver's minimum response
time, as well as whether it appears to be operating reliably and
correctly.

While this is underway, every qualifying resolver is dynamically
"ranked" from fastest to slowest. The top line of the display above
shows the minimum response time of the fastest resolver found so far, as
well as the minimum response time of the "50th fastest."

Once the ranking scan is completed, the IP addresses of those 50 fastest
qualifying resolvers will be loaded into the Benchmark so that you can
immediately perform a comprehensive analysis of their performance and
the IPs will also be written to a file named "DNSBENCH.INI", located in
the same directory as the Benchmark. This "DNSBENCH.INI" file will then
automatically be loaded whenever the Benchmark is run in the future.

All results obtained during this global resolver ranking process, and
while benchmarking, will be far more accurate if all other network usage
is minimized while the Benchmark is working."


Bear

unread,
Apr 15, 2012, 7:59:58 PM4/15/12
to
Mike Easter <Mi...@ster.invalid> wrote in
news:9v153o...@mid.individual.net:
So you agree, it's less than 1/4 second. This is very funny to me.

p-0^0-h the cat

unread,
Apr 15, 2012, 8:07:51 PM4/15/12
to
On Sun, 15 Apr 2012 19:38:24 -0400, Mark Warner <mhwarner.i...@gmail.com> wrote:

>Mike Easter wrote:
>> Bear wrote:
>>> Mike Easter
>>>>
>>>> The graphic you posted from the 'response time' tab does not have any of
>>>> the cached, uncached, or dotcom color bars red, green, and blue that
>>>> represent the results of the benchmarking itself.
>>>
>>> Are you daft? Did you read the quoted description for creating the custom
>>> list?
>>
>> The creation of the custom list is not a benchmark but a screen. The
>> result of that screen does not provide any speed 'ranking' for the
>> user's observation except that of selecting the first 50 from the 5000,
>> which 50 are viewed unranked in terms of speed or 'closeness'.
>
>Don't confuse him with the facts.

Facts are for schmucks, the Bearpair is immaculate, Bearpair theory is immutable, its
falsifiability unthinkable.

Bearpair methodology transcends using any method at all.

Mike Easter

unread,
Apr 15, 2012, 8:48:25 PM4/15/12
to
Bear wrote:
> Mike Easter

>> The creation of the custom list is not a benchmark but a screen. The
>> result of that screen does not provide any speed 'ranking' for the
>> user's observation except that of selecting the first 50 from the
>> 5000, which 50 are viewed unranked in terms of speed or 'closeness'.
>>
>>
>>
>
> How do you then account for this?
>
> "This GRC DNS Benchmark is scanning 4,849 global DNS resolvers to
> determine whether they are accessible and responsive from your present
> location. If so, the Benchmark measures the resolver's minimum response
> time, as well as whether it appears to be operating reliably and
> correctly.

The tool's screen flashes through more than 2 IPs per second.

> While this is underway, every qualifying resolver is dynamically
> "ranked" from fastest to slowest. The top line of the display above
> shows the minimum response time of the fastest resolver found so far, as
> well as the minimum response time of the "50th fastest."

While the qualifying resolvers may be ranked internally by the metric
used by the screening process, the user is not shown the specific
information from that metric. The user is only given the list of the
50 fastest/closest servers, but not the number that earned each server
its position, nor the relative position of any of the servers to any of
the others.

That ranking is only achieved when the list is used to do the benchmark.
The above process is a selection or screening process, not a benchmark.

> Once the ranking scan is completed, the IP addresses of those 50 fastest
> qualifying resolvers will be loaded into the Benchmark so that you can
> immediately perform a comprehensive analysis of their performance and
> the IPs will also be written to a file named "DNSBENCH.INI", located in
> the same directory as the Benchmark. This "DNSBENCH.INI" file will then
> automatically be loaded whenever the Benchmark is run in the future.

Those qualifying resolvers are not ranked by the screening process
against each other, and no speed metric is provided to the user at the
stage of creating the .ini file. The user only sees the 50 IPs, plus
names for those which have them; the user is given zero information
about how they compared to each other or what number was used to rank
them.

The 'ranking scan' does *not* give the user the rank; it only gives the
user the IPs of the 50 as a 'mass'.

> All results obtained during this global resolver ranking process, and
> while benchmarking, will be far more accurate if all other network usage
> is minimized while the Benchmark is working."

The accuracy is desirable so that the 'right' (network-closest, fastest)
50 will be chosen, but accurate or not, the user will not see any actual
numbers or rankings for any of those 50 against each other until the user
runs the actual benchmark.


--
Mike Easter

Mike Easter

unread,
Apr 15, 2012, 8:56:26 PM4/15/12
to
Mike Easter wrote:

> Those qualifying resolvers are not ranked by the screening process
> against each other

... in a manner visible to the user, except that the user is informed of
the fastest/closest 50 en masse, not ranked against each other.

> and no speed metric is provided to the user at the
> stage of creating the .ini file. The user only sees the 50 IPs, plus
> names for those which have them; the user is given zero information
> about how they compared to each other or what number was used to rank
> them.
>
> The 'ranking scan' does *not* give the user the rank; it only gives the
> user the IPs of the 50 as a 'mass'.

You might say that the scanning tool keeps the per-IP timing a secret
from the user (except the minimum and maximum for the 50 at the end). The
scanning tool likewise keeps secret the per-IP timings of the 200 results
which are sent to Gibson's site.

The user can only see the minimum and maximum times being tallied during
the scanning process, as well as other numbers such as how many
resolved, how many refused, how many no-replied, and which IP is being
scanned at any given instant as they flash by.


--
Mike Easter
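
Pulling the thread's distinction together, the disputed workflow looks
like this in miniature, reusing the sketches above (save_resolver_list,
screen_resolvers, benchmark_resolver, all assumed in scope). The
addresses are well-known public resolvers standing in for the tool's
roughly 5,000-IP pool; none of this reimplements DNS Benchmark itself.

    # One-time ranking scan: shortlist, then save addresses only.
    candidates = ["8.8.8.8", "8.8.4.4", "208.67.222.222"]  # stand-ins
    top = screen_resolvers(candidates, keep=2)
    save_resolver_list(top)   # the saved list carries no speeds

    # The benchmark proper: detailed timings for the shortlist only.
    for ip in top:
        print(ip, benchmark_resolver(ip))

The saved list records nothing a later run could read back as a ranking;
only the second phase produces the comparative numbers the thread spent
so long arguing over.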