Worth of a DNSBL


Larry M. Smith

Aug 3, 2007, 6:33:17 PM
It occurred to me some time ago that we really should have a valuation
system for DNSBLs: some formula that mail admins can just plug their
own systems' numbers into and have it spit out a value that tells them
whether a list would help them or not.

So for starters, we have two DNSBLs that are at the same time the most
accurate and the most inaccurate possible at identifying spam and at
misidentifying it:

nofalsenegative.stopspam.samspade.org = lists everything.
nofalsepositive.stopspam.samspade.org = lists nothing.

I'm sure that most[1] would say that neither of these lists has any
value as a production DNSBL. But I'm also sure that anyone who used
nofalsenegative to reject during the SMTP conversation would find
themselves out of a job; using this list is a detriment. In other
words, it has negative value, very high negative value. Any valuation
formula would need to weight false positives more heavily than false
negatives, not simply subtract false positives from true positives.

For scoring systems, false positives are scored lighter, but the
lighter score also lowers the value of the true positives... The two
sides wash out, lowering the list's overall value.

So I guess a DNSBL valuation formula would look something like this:

(percent_true_positives - (percent_false_positives * 10)) * (spam_score / threshold) = value

If using to reject during SMTP, spam_score and threshold are the same value.

nofalsepositive = 0% spam_hits, 0% FPs = 0
nofalsenegative = 100% spam_hits, 100% FPs = -900

According to Al Iverson's project stats.dnsbl.com:

               TPs   FPs   Score
Spamhaus ZEN   81%    0%     81
Spamcop        51%    0%     51
PSBL           35%    0%     35
APEWS          82%   20%   -118
FiveTen        42%   46%   -418

FP*10 might be a little high, but I just can't imagine a false positive
rate of one message in every ten having any value... I can, however,
see value in a DNSBL that identifies one spam in every ten with no
false positives.
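
In code, the whole thing is only a few lines. Here's a minimal Python
sketch of the formula (assuming percentages on a 0-100 scale, with
spam_score/threshold collapsing to 1 when rejecting outright at SMTP
time; the stats are the ones quoted above):

def dnsbl_value(tp_pct, fp_pct, spam_score=1.0, threshold=1.0):
    # (percent_true_positives - (percent_false_positives * 10))
    #     * (spam_score / threshold)
    return (tp_pct - fp_pct * 10) * (spam_score / threshold)

# The two degenerate lists from above:
assert dnsbl_value(0, 0) == 0         # nofalsepositive
assert dnsbl_value(100, 100) == -900  # nofalsenegative

# The stats.dnsbl.com numbers quoted above:
for name, tp, fp in [("Spamhaus ZEN", 81, 0), ("Spamcop", 51, 0),
                     ("PSBL", 35, 0), ("APEWS", 82, 20),
                     ("FiveTen", 42, 46)]:
    print(f"{name:<13} {dnsbl_value(tp, fp):>6.0f}")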


SgtChains

[1] Ummm... I shouldn't need to quantify "most" here.

--
Comments posted to news.admin.net-abuse.blocklisting
are solely the responsibility of their author. Please
read the news.admin.net-abuse.blocklisting FAQ at
http://www.blocklisting.com/faq.html before posting.

Matthew Sullivan

Aug 3, 2007, 9:12:04 PM
Larry M. Smith wrote:
>
> So I guess a DNSBL valuation formula would look something like this:
>
> (percent_true_positives - (percent_false_positives * 10)) * (spam_score / threshold) = value
>
> If using to reject during SMTP, spam_score and threshold are the same
> value.
>
> nofalsepositive = 0% spam_hits, 0% FPs = 0
> nofalsenegative = 100% spam_hits, 100% FPs = -900


That's an interesting formula; I like it.


> According to Al Iverson's project stats.dnsbl.com:


Need to find something closer to reality to run your formula against.

I thought there was someone who was previously producing sane stats
based on a real email system...?

Regards,

Mat

Seth

Aug 3, 2007, 10:21:17 PM
In article <46b3ac88$0$3153$ae4e...@news.nationwide.net>,

Larry M. Smith <SgtChains-...@FahQ2.com> wrote:

>So I guess a DNSBL valuation formula would look something like this:
>
>(percent_true_positives - (percent_false_positives * 10)) * (spam_score / threshold) = value
>
>If using to reject during SMTP, spam_score and threshold are the same value.

What are they if otherwise?

>nofalsepositive = 0% spam_hits, 0% FPs = 0
>nofalsenegative = 100% spam_hits, 100% FPs = -900

So you're defining false positives and false negatives relative to the
appropriate kind of mail. That's fine, it just needs specifying.
(Otherwise nofalsenegatives is 90% true positive, 10% false positive,
given that spam is 90% of email.)
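
To make the distinction concrete (a quick Python sketch, using that
90% figure):

TOTAL = 1000        # messages, 90% of which are spam
SPAM, HAM = 900, 100
listed_ham = 100    # nofalsenegative lists everything, ham included

print(listed_ham / HAM)    # 1.0 -> 100% FP measured against ham
print(listed_ham / TOTAL)  # 0.1 ->  10% FP measured against all mail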

>FP*10 might be a little high,

I think it's low. Would you rather get 10 spams or lose one real
message? (Especially since the lost real messages tend to be
personal, since mailing lists get whitelisted easily enough when you
subscribe or at the first sign of trouble.)

> but I just can't imagine a false positive
>rate of one message in every ten having any value... I can however see
>value in a DNSBL that identifies one spam in every ten with no false
>positives.

Anything that identifies some spam with no false positives has value
(unless the spam would otherwise be blocked anyway).

Seth

huey.c...@gmail.com

Aug 4, 2007, 7:50:24 AM
Matthew Sullivan <usene...@sorbs.net> wrote:
> I thought there was someone who was previously producing sane stats
> based on a real email system...?

The only other public stats I'm aware of are Jeff Makey's at the San
Diego Supercomputer Center, but those only count raw hits, and have no
indication of false positives and false negatives. As such, they're only
loosely tied to DNSBL effectiveness and mostly reflect raw list size.
Stats are here: http://www.sdsc.edu/~jeff/spam/cbc.html

Interesting that spamhaus zen still has more hits than APEWS level 2.

--
Huey

Matthias Leisi

Aug 4, 2007, 9:25:20 AM
Matthew Sullivan wrote:

>> (percent_true_positives - (percent_false_positives * 10)) * (spam_score / threshold) = value
>>

[..]


> Need to find something closer to reality to run your formula against.
>
> I thought there was someone who was previously producing sane stats
> based on a real email system...?

Do you mean SpamAssassin's rule QA system
(http://ruleqa.spamassassin.org/)? Select one of the "Network
Mass-Checks", otherwise there won't be any DNSBL rules (click on the
"Name" table heading and search for "RCVD_IN").

BTW, SpamAssassin does true/false positive/negative scoring already
(see the Spam%, Ham%, S/O etc. in the QA system). And it does so not
just for DNSBLs, but for any type of rule (although, admittedly, in a
highly SA-specific way that may be difficult to reuse in a more generic
environment).

-- Matthias

--
http://www.dnswl.org/

Herb Oxley

Aug 4, 2007, 9:33:19 AM
huey.c...@gmail.com wrote:

> Interesting that spamhaus zen still has more hits than APEWS level 2.

Simply (IMO), Spamhaus ZEN is a sniper rifle; APEWS (and SPEWS) are
blunderbusses.

Hopefully the SBL data feed fees are helping to fund full-time analyst
positions.

--
Herb Oxley

Matthew Sullivan

Aug 4, 2007, 2:00:45 PM
huey.c...@gmail.com wrote:
> Matthew Sullivan <usene...@sorbs.net> wrote:
>> I thought there was someone who was previously producing sane stats
>> based on a real email system...?
>
> The only other public stats I'm aware of are Jeff Makey's at the San
> Diego Supercomputer Center, but those only count raw hits, and have no
> indication of false positives and false negatives. As such, they're only
> loosely tied to DNSBL effectiveness and mostly reflect raw list size.
> Stats are here: http://www.sdsc.edu/~jeff/spam/cbc.html


For some reason I think there is something else....


> Interesting that spamhaus zen still has more hits than APEWS level 2.


That is not a good thing to say. It can imply that APEWS is not
covering as much as ZEN.

/ Mat

Al

Aug 4, 2007, 8:19:20 PM
On Aug 3, 5:33 pm, "Larry M. Smith" <SgtChains-usenet2...@FahQ2.com>
wrote:

> It occurred to me some time ago that we really should have a valuation
> system for DNSBLs: some formula that mail admins can just plug their
> own systems' numbers into and have it spit out a value that tells them
> whether a list would help them or not.

[...]

> (percent_true_positives - (percent_false_positives * 10)) * (spam_score / threshold) = value

This is a lot like what I'm thinking of for the DNSBL.com data.
SenderScore has a "score" for a sending mail system; why not a similar
"score" for a DNSBL, based on a set of criteria like this?

I think you're on to something here.

Like you and others have pointed out, you gotta have SOMETHING to
offset or more accurately measure, beyond "how much spam does it
block," since you can easily block 100% of spam by blocking all mail,
and that doesn't account for accuracy.

As far as "sane" stats as opposed to mine, it seems like everybody
else's reporting of false positives leans toward the anecdotal rather
than the reliably measurable.

huey.c...@gmail.com

Aug 5, 2007, 9:49:17 AM
Matthew Sullivan <usene...@sorbs.net> wrote:

> huey.c...@gmail.com wrote:
> > The only other public stats I'm aware of are Jeff Makey's at the San
> > Diego Supercomputer Center, but those only count raw hits, and have
> > no indication of false positives and false negatives. As such,
> > they're only loosely tied to DNSBL effectiveness and mostly reflect
> > raw list size. Stats are here: http://www.sdsc.edu/~jeff/spam/cbc.html
> > Interesting that spamhaus zen still has more hits than APEWS level 2.
> That is not a good thing to say. It can imply that APEWS is not
> covering as much as ZEN.

Andrew from Supernews recently posted in n.a.n-a.e that APEWS is on the
order of thirty-eight times larger than SPEWS at its largest, meaning
that it's at least an order of magnitude larger than spamhaus zen.

If Spamhaus is a scalpel that only cuts around a tumor, APEWS is a
nuclear weapon that obliterates an entire city. Since there is no 'city
of spammers' (popular perception of Verizon or Boca Raton
notwithstanding) I find this to be entirely the wrong weapon.

--
Huey

Larry M. Smith

Aug 5, 2007, 10:14:58 AM
Seth wrote:
(snip)

>> nofalsepositive = 0% spam_hits, 0% FPs = 0
>> nofalsenegative = 100% spam_hits, 100% FPs = -900
>
> So you're defining false positives and false negatives relative to the
> appropriate kind of mail. That's fine, it just needs specifying.
> (Otherwise nofalsenegatives is 90% true positive, 10% false positive,
> given that spam is 90% of email.)

Correct.

"Spam_hits" = percent of spam identified as spam, AKA true positives.
"FPs" = percent of "ham" identified as spam, false positives.

So in the example, nofalsenegative identifies all spam as spam, and it
also identifies all non-spam as spam. The result is 100% and 100%.

Hal Murray

Aug 5, 2007, 10:16:56 AM
>Like you and others have pointed out, you gotta have SOMETHING to
>offset or more accurately measure, beyond "how much spam does it
>block," since you can easily block 100% of spam by blocking all mail,
>and that doesn't account for accuracy.

>As far as "sane" stats as opposed to mine, it seems like everybody
>else's reporting of false positives leans toward the anecdotal and not
>reliably measurable.

I think a "good" formula will require data that isn't easy to get.

Suppose I decide false negatives are worth x points and false positives
are worth y points. In order to compute the total goodness of a DNSBL,
you will need the number of spam/ham messages from each source that
your DNSBL might catch/miss.

For example, if I don't get any valid mail from hotmail or wanadoo
then blocking them all is a good thing. If you do get some mail
from them, you probably don't want to block them.

Another possible complication... There are several/many block lists
that I might use. If N of them would flag the same message, maybe
they should only get 1/N-th credit each. That means a list that gets
junk that others don't might be important. (mumble)
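
Something like this, maybe (a rough Python sketch; the message IDs and
list names are made up for illustration):

from collections import defaultdict

def shared_credit(hits):
    # hits: message id -> set of DNSBLs that flagged that message
    credit = defaultdict(float)
    for lists in hits.values():
        for dnsbl in lists:
            credit[dnsbl] += 1.0 / len(lists)  # 1/N-th credit each
    return dict(credit)

# A message two lists flag is worth half a point to each; a message
# only one list catches is worth a full point to that list.
print(shared_credit({"m1": {"zen", "psbl"}, "m2": {"psbl"}}))
# {'zen': 0.5, 'psbl': 1.5}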

--
These are my opinions, not necessarily my employer's. I hate spam.

Andrew - Supernews

Aug 5, 2007, 12:02:43 PM
On 2007-08-05, huey.c...@gmail.com <huey.c...@gmail.com> wrote:
> Matthew Sullivan <usene...@sorbs.net> wrote:
>> huey.c...@gmail.com wrote:
>> > The only other public stats I'm aware of are Jeff Makey's at the San
>> > Diego Supercomputer Center, but those only count raw hits, and have
>> > no indication of false positives and false negatives. As such,
>> > they're only loosely tied to DNSBL effectiveness and mostly reflect
>> > raw list size. Stats are here: http://www.sdsc.edu/~jeff/spam/cbc.html
>> > Interesting that spamhaus zen still has more hits than APEWS level 2.
>> That is not a good thing to say. It can imply that APEWS is not
>> covering as much as ZEN.
>
> Andrew from Supernews recently posted in n.a.n-a.e that APEWS is on the
> order of thirty-eight times larger than SPEWS at its largest, meaning
> that it's at least an order of magnitude larger than spamhaus zen.

Only for small orders of magnitude - PBL is about 350 million IPs, which is
a bit over half the size of APEWS (discounting APEWS' listings of nonrouted
space). The other components of zen are negligible by comparison.

However, the intersection of PBL and APEWS is only about 230 million IPs,
so PBL has around 120 million IPs listed that APEWS does not.

--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

Larry M. Smith

Aug 5, 2007, 2:57:16 PM
Hal Murray wrote:
(snip)

> For example, if I don't get any valid mail from hotmail or wanadoo
> then blocking them all is a good thing. If you do get some mail
> from them, you probably don't want to block them.
>

Nothing is perfect. Then again, using the formula against *your own*
data would show *your own* stats.

> Another possible complication... There are several/many block lists
> that I might use. If N of them would flag the same message, maybe
> they should only get 1/N-th credit each. That means a list that gets
> junk that others don't might be important. (mumble)
>

Aggregating multiple lists would require aggregate data; in other words,
counting both lists as one shakes out the overlap. I'll just
make some data up and say that Zen+APEWS might catch, as an aggregate,
85% of spam with 20% FPs. But using the same completely imaginary data
Zen+Spamcop might also catch 85% of the spam with 0% FPs. Both of these
bogus datasets would show an additional 5% spam hits over Zen alone.
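
In code terms, "counting both as one" is just a set union over
per-message hits, so the overlap isn't double-counted. A quick Python
sketch with made-up message IDs (100 spam messages, so hit counts
double as percentages):

zen     = {f"s{i}" for i in range(80)}      # Zen catches s0..s79
spamcop = {f"s{i}" for i in range(30, 85)}  # overlaps heavily with Zen

aggregate = zen | spamcop  # union, not sum, shakes out the overlap
print(len(zen), len(spamcop), len(aggregate))
# 80 55 85 -> the aggregate adds only 5 hits over Zen alone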


SgtChains

Christian Rossow

Aug 7, 2007, 8:00:43 AM
On Aug 4, 1:33 am, "Larry M. Smith" <SgtChains-usenet2...@FahQ2.com>
wrote:

> It occurred to me some time ago that we really should have a valuation
> system for DNSBLs: some formula that mail admins can just plug their
> own systems' numbers into and have it spit out a value that tells them
> whether a list would help them or not.
Well, I have had the same idea for a few months now, but the data for
this is still missing. The way you weight FPs and FNs can be debated,
but the main approach is nice.

> According to Al Iverson's project stats.dnsbl.com:

IMO you can't base this kind of calculation on the FP stats published
by Al. As often discussed, they don't represent common email traffic.

For this reason, some time ago I started a discussion about another
(likely better) approach to measuring FP rates (see the thread "next
generation Hamtrap" in NANAE). This might be the "sane data" Matthew
tried to remember, but please correct me if I'm wrong.

The concept of this hamtrap is implemented, but I haven't yet spent the
time to feed it with real data (i.e., mailing lists). However, this
discussion increases my motivation to do so. Once useful FP data is
available (might be end of August), I'll proceed to the next step:
building the kind of DNSBL valuation system you described.

Anyway, my research team and I want to, and will, provide an
implementation. Until then, I can only ask for your patience.

Cheers,
Christian

--
Christian Rossow
Team e-mail security
Institute for Internet Security
University of Applied Sciences Gelsenkirchen
45877 Gelsenkirchen (Germany)
https://www.internet-sicherheit.de

Larry M. Smith

Aug 7, 2007, 11:12:16 AM
Christian Rossow wrote:
> On Aug 4, 1:33 am, "Larry M. Smith" <SgtChains-usenet2...@FahQ2.com>
> wrote:
>> It occurred to me some time ago that we really should have a valuation
>> system for DNSBLs: some formula that mail admins can just plug their
>> own systems' numbers into and have it spit out a value that tells them
>> whether a list would help them or not.
> Well, I have had the same idea for a few months now, but the data for
> this is still missing. The way you weight FPs and FNs can be debated,
> but the main approach is nice.
>
>> According to Al Iverson's project stats.dnsbl.com:
> IMO you can't base this kind of calculation on the FP stats published
> by Al. As often discussed, they don't represent common email traffic.
>

Nor does Al Iverson's sample appear to be large enough to display
anything other than Al Iverson's view of the spam problem... However,
he is currently the only one that I know of attempting to quantify FPs
and openly publishing the results. Despite the size problem, the data
does give us some insight as to how well some DNSBLs are operating.

Obtaining a dataset large enough to be reflective of global traffic
would be a herculean task in itself. Consider also that you can't have
the users help you out with "this is spam" and "this is not-spam"
buttons... Just ask anyone who has an AOL feedback loop; users often
misreport ham as spam, and possibly even spam as ham.

> For this reason, some time ago I started a discussion about another
> (likely better) approach to measuring FP rates (see the thread "next
> generation Hamtrap" in NANAE). This might be the "sane data" Matthew
> tried to remember, but please correct me if I'm wrong.
>
> The concept of this hamtrap is implemented, but I haven't yet spent the
> time to feed it with real data (i.e., mailing lists). However, this
> discussion increases my motivation to do so. Once useful FP data is
> available (might be end of August), I'll proceed to the next step:
> building the kind of DNSBL valuation system you described.
>

I would need to review that thread before commenting on it fully, but my
belief is that mailing list traffic wouldn't be representative of "real
world" email use. This is just a guess on my part, but I would believe
that most email users don't subscribe to anything that most here would
describe as a "mailing list".


SgtChains

huey.c...@gmail.com

Aug 7, 2007, 4:44:16 PM
Larry M. Smith <SgtChains-...@fahq2.com> wrote:
> Obtaining a dataset large enough to be reflective of global traffic
> would be a herculean task in itself. Consider also that you can't
> have the users help you out with "this is spam" and "this is not-spam"
> buttons... Just ask anyone who has an AOL feedback loop; users often
> misreport ham as spam, and possibly even spam as ham.

Not strictly true. Yes, the end-users are wrong, but they're wrong at
fairly predictable rates. AOL has never released their acceptable
margins, but at the last FTC forum, Sender Score Certified indicated
that their acceptable complaint rates were something like "between 0.4%
and 2.9%", if I recall correctly. Jon Praed pointed out (and correctly)
that "ten million people can't be wrong". If you make it easy for your
users to give you feedback, they will, and regardless of the inherent
untrustworthiness of any single end-user complaint, if you get hundreds
or thousands of complaints all from different users, that's a pretty
reliable indicator for "somethin wrong with mail from these guys", and
that is a VITAL piece of information for determining ham vs. spam.

And it also fails gracefully, since even if a whole bunch of users
report ham as spam or spam as ham, you can still base your whitelisting
and blacklisting decisions on those mistakes. Even if they're all wrong,
you're still doing what your users want, and when the sender calls you
up and says "But all my mail is confirmed opt-in! All of those users
signed up!", you can tell them "My users have indicated that they don't
want your mail, and I'm not inclined to argue with them".

--
Huey

DevilsPGD

Aug 8, 2007, 6:42:22 AM
In message <Fv2dnQgs6KWAeyXb...@speakeasy.net>
huey.c...@gmail.com wrote:

>And it also fails gracefully, since even if a whole bunch of users
>report ham as spam or spam as ham, you can still base your whitelisting
>and blacklisting decisions on those mistakes. Even if they're all wrong,
>you're still doing what your users want, and when the sender calls you
>up and says "But all my mail is confirmed opt-in! All of those users
>signed up!", you can tell them "My users have indicated that they don't
>want your mail, and I'm not inclined to argue with them".

Unfortunately, this isn't true -- Users don't always fail in the same
ways. (In other words, one man's ham is another's spam)

I have one AOL recipient who frequently requests quotes from one of my
hosting customers (the quote request is CC'd to a dozen or so different
companies), then flags all but the one they accept as spam.

Most annoying. I assume this is average behaviour for AOL end users.

--
Americans couldn't be any more self-absorbed if they were made from equal
parts water and papertowel.
-- Dennis Miller

Christian Rossow

Aug 8, 2007, 6:43:16 AM
On Aug 7, 6:12 pm, "Larry M. Smith" <SgtChains-usenet2...@FahQ2.com>
wrote:

> Nor does Al Iverson's sample appear to be large enough to display
> anything other than Al Iverson's view of the spam problem...
I agree that < 100 ham mails per measurement are a weak base. My goal
is to have more than 1,000 hams (with unique sender addresses) per day.

> I would need to review that thread before commenting on it fully

Please, have a look.

> but my
> belief is that mailing list traffic wouldn't be representative of "real
> world" email use. This is just a guess on my part, but I would believe
> that most email users don't subscribe to anything that most here would
> describe as a "mailing list"

Of course mailing list traffic isn't exactly the traffic John Doe
expects as private mail. However, the hamtrap doesn't look at the
contents of the mail; it cares about the sending MTAs only. That is,
what matters isn't the content of the mails but the senders' smarthosts
(which are used for "regular" mail traffic as well).

Regards,
Christian

Al

Aug 8, 2007, 11:58:18 AM
On Aug 7, 10:12 am, "Larry M. Smith" <SgtChains-usenet2...@FahQ2.com>
wrote:

> Obtaining a dataset large enough to be reflective of global traffic
> would be a herculean task in itself. Consider also that you can't have
> the users help you out with "this is spam" and "this is not-spam"
> buttons... Just ask anyone who has an AOL feedback loop; users often
> misreport ham as spam, and possibly even spam as ham.

Well, it's rare that I would disagree with Larry, but I actually do
work with many entities that have AOL feedback loops, and the data's
insanely insightful.

Like you say, mistakes do happen, and goobers do misreport. But ISPs
with feedback loops generally don't block an IP based on one complaint
from an end user; if the complaints spike far above the expected noise
level for average mailings, they often will consider that sending IP a
candidate for blocking.

The value for "far above" varies, but overall, it's far more nuanced
and detailed than the "one spamtrap hit and you're dead" methodology
of some DNSBLs.

I guess I think of it as seeing the world out through the tiny window
of your own mail server (which is all a random guy running a blacklist
often has), versus the millions of very large windows into mailboxes
that an AOL has. If AOL took all that reputation data and
shared it out to the world as a DNSBL? We'd kill spam dead, and quick.
Too bad it's too legally risky (or at least perceived as such by ISPs
sitting on data like this).

BTW: This, in a nutshell, is what used to irritate me about Spamcop in
years past. They triggered BL entries based on very low complaint
counts, either because they set thresholds so low that they could be
gamed by false complaints, or because their world view was limited
enough to skew the math needed to calculate this stuff correctly.
(All since fixed, seemingly...)

Al

huey.c...@gmail.com

Aug 8, 2007, 11:52:07 AM
DevilsPGD <spam_na...@crazyhat.net> wrote:
> huey.c...@gmail.com wrote:
> > And it also fails gracefully, since even if a whole bunch of users
> > report ham as spam or spam as ham, you can still base your
> > whitelisting and blacklisting decisions on those mistakes. Even if
> > they're all wrong, you're still doing what your users want, and when
> > the sender calls you up and says "But all my mail is confirmed
> > opt-in! All of those users signed up!", you can tell them "My users
> > have indicated that they don't want your mail, and I'm not inclined
> > to argue with them".
> Unfortunately, this isn't true -- Users don't always fail in the same
> ways. (In other words, one man's ham is another's spam)
> I have one AOL recipient who frequently requests quotes from one of my
> hosting customers (the quote request is CC'd to a dozen or so
> different companies), then flags all but the one they accept as spam.
> Most annoying. I assume this is average behaviour for AOL end users.

It is true. Note that I said "a whole bunch of users", while you said
"I have one AOL recipient". The plural of "anecdote" is not "data", but
if a large number (i.e., hundreds or thousands) of users all indicate
that something is unwanted, whether or not it's spam, it makes sense to
block it. This is why AOL does not base its blocking decisions on "I
have one AOL recipient".

Samspade.org reported that their confirmed opt-in mailing list of people
interested in antispam tools (a list which one would EXPECT to generally
be more clued-in to these issues) received single-digit percentages of
complaints. So, yes, plenty of individual users can and will be wrong.
But, in the aggregate, the majority of the users are right -- and even
if they're not, it makes good business sense to listen to them.

--
Huey

nosp...@trashymail.com

Aug 8, 2007, 7:57:26 PM
Al <aliversonch...@gmail.com> wrote:

> If AOL took all that reputation data and
> shared it out to the world as a DNSBL? We'd kill spam dead, and quick.
> Too bad it's too legally risky (or at least perceived as such by ISPs
> sitting on data like this).

Too bad AOL couldn't stealthily resurrect SPEWS - using the AOL data!

--

Herb Oxley

Shmuel (Seymour J.) Metz

Aug 10, 2007, 10:01:50 AM
In <Fv2dnQgs6KWAeyXb...@speakeasy.net>, on 08/07/2007

at 08:44 PM, huey.c...@gmail.com said:

>And it also fails gracefully, since even if a whole bunch of users
>report ham as spam or spam as ham, you can still base your
>whitelisting and blacklisting decisions on those mistakes. Even if
>they're all wrong, you're still doing what your users want,

You're assuming that the user who reports ham as spam will accept
responsibility for the consequences. There have been reports of users
who were upset by blocking that they requested.

--
Shmuel (Seymour J.) Metz, truly insane Spews puppet
<http://patriot.net/~shmuel>

I reserve the right to publicly post or ridicule any abusive
E-mail. Reply to domain Patriot dot net user shmuel+news to contact
me. Do not reply to spam...@library.lspace.org

huey.c...@gmail.com

Aug 11, 2007, 3:32:27 AM
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote:
> huey.c...@gmail.com said:
> > And it also fails gracefully, since even if a whole bunch of users
> > report ham as spam or spam as ham, you can still base your
> > whitelisting and blacklisting decisions on those mistakes. Even if
> > they're all wrong, you're still doing what your users want,
> You're assuming that the user who reports ham as spam will accept
> responsibility for the consequences. There have been reports of users
> who were upset by blocking that they requested.

Yes, but as Jon Praed said at the FTC conference last month, "Ten
million users can't be wrong".

Suppose I'm a big ISP with a 'report this as spam' button in my UI, and
you're a small mailer whose list includes 10,000 recipients at my ISP.
Now, I know that, on average, 20% of my users can be counted on to
report a spam, 2% of my users can be counted on to report ham as spam,
and .2% of my users will complain if they don't get a legitimate
email.[1] I can then set my spam-filtering systems to start dropping
connections from anyone who goes over a threshhold of, say, 2.5%
complaint rate.

If your list is only 2.75% dirty - 275 people who didn't solicit it, and
9725 who did - you get through the filter, I get 54 legit spam
complaints, and 196 bogus complaints. But, any dirtier than that, and
you start to get blocked.

Around 3.2% dirty, my filter cuts you off at 295 RCPT TOs before the end
of your list, at which point you've sent me 9394 legit emails and 311
spams. I get 62 legit complaints and 188 bogus complaints. And one of
those 295 users who didn't get the email is the first to complain about
legit mail being blocked.

At 4% dirty, my filter cuts you off at 92% of your list. You've sent me
8824 legit emails, and 368 spams. I get 74 legit complaints, 176
bogus complaints, and the second user out of that 8% that didn't get
delivered complains about not getting wanted email.

At around 17% dirty, my filter cuts you off halfway through your list.
You've sent me 4100 legit mails and 800 spams, from which I get 170
legit complaints, 80 bogus complaints, and nine complaints about
blocking wanted mail.

I'm not going to bother doing multivariable calculus to find how to
minimize complaints, because all of the inherent assumptions are going
to be different for each big ISP, depending on how well they've educated
their userbase, how hard their domain is to attack, how old their user
list is, how exposed their user list is, how leaked their user list is,
how easy it is for those users to report spam or complain about blocked
wanted mail, and so on. The point is that, even knowing that some
predictable percentage of your users is going to be part of the solution
and some other percentage is going to be part of the problem, you can
apply relatively simple math to determine what a sensible complaint-rate
threshold is for blocking, even given those folks who are unreliable.
Yes, they're unreliable, but taken on a scale of thousands, they're
unreliable in predictable ways.
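
If you want to sanity-check the arithmetic, here's a rough Python
sketch using the assumed rates from above (20% spam-report, 2%
ham-misreport, 2.5% complaint-rate cutoff; expected values only, no
randomness):

LIST_SIZE   = 10000
SPAM_REPORT = 0.20    # fraction of users who report a real spam
HAM_REPORT  = 0.02    # fraction who report wanted mail as spam
CUTOFF      = 0.025   # complaint rate that triggers blocking

def delivered_before_block(dirty):
    # messages accepted before the complaint rate crosses the cutoff
    complaints = 0.0
    for n in range(1, LIST_SIZE + 1):
        # expected complaints generated by one more delivery
        complaints += dirty * SPAM_REPORT + (1 - dirty) * HAM_REPORT
        if complaints / LIST_SIZE > CUTOFF:
            return n - 1
    return LIST_SIZE  # the whole list gets through

for dirty in (0.0275, 0.032, 0.04, 0.17):
    n = delivered_before_block(dirty)
    print(f"{dirty:.2%} dirty: {n} of {LIST_SIZE} delivered")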

[1] I bet that's high. Generally the only people who can be counted on
to complain are the mailers, but minimizing this number is still
important, given that the point of an email provider is to provide
email, not just block spam.

--
Huey

Matthias Leisi

Aug 11, 2007, 5:26:24 PM
huey.c...@gmail.com wrote:

> Suppose I'm a big ISP with a 'report this as spam' button in my UI, and
> you're a small mailer whose list includes 10,000 recipients at my ISP.
> Now, I know that, on average, 20% of my users can be counted on to
> report a spam, 2% of my users can be counted on to report ham as spam,

In my environment (corporate users), around 1/3 of user submissions are
"wrong" (i.e., ham reported as spam, spam as ham). Under these
circumstances, automated decisions / filter adaptation are not feasible.

-- Matthias

--
http://www.dnswl.org/ - Protect against false positives

Chris Lewis

Aug 13, 2007, 11:40:41 AM
According to Matthias Leisi <matt...@leisi.net>:

> huey.c...@gmail.com wrote:
>
> > Suppose I'm a big ISP with a 'report this as spam' button in my UI, and
> > you're a small mailer whose list includes 10,000 recipients at my ISP.
> > Now, I know that, on average, 20% of my users can be counted on to
> > report a spam, 2% of my users can be counted on to report ham as spam,
>
> In my environment (corporate users), around 1/3 of user submissions are
> "wrong" (i.e., ham reported as spam, spam as ham). Under these
> circumstances, automated decisions / filter adaptation are not feasible.

Strange. In our corporate environment (forwarding spam to a special
address, no "ham" reporting mechanisms), it'd have to be < 1%.
--
Chris Lewis,

Age and Treachery will Triumph over Youth and Skill
It's not just anyone who gets a Starship Cruiser class named after them.

DevilsPGD

Aug 13, 2007, 3:20:20 PM
In message <13c118e...@corp.supernews.com>
cle...@nortelnetworks.com (Chris Lewis) wrote:

>According to Matthias Leisi <matt...@leisi.net>:
>> huey.c...@gmail.com wrote:
>>
>> > Suppose I'm a big ISP with a 'report this as spam' button in my UI, and
>> > you're a small mailer whose list includes 10,000 recipients at my ISP.
>> > Now, I know that, on average, 20% of my users can be counted on to
>> > report a spam, 2% of my users can be counted on to report ham as spam,
>>
>> In my environment (corporate users), around 1/3 of user submissions are
>> "wrong" (i.e., ham reported as spam, spam as ham). Under these
>> circumstances, automated decisions / filter adaptation are not feasible.
>
>Strange. In our corporate environment (forwarding spam to a special
>address, no "ham" reporting mechanisms), it'd have to be < 1%.

Forwarding (via attachment, I'd assume) is well beyond the capabilities
of most of my users -- They are moderately capable of clicking on a
"This is spam" button.

--
Americans couldn't be any more self-absorbed if they were made from equal
parts water and papertowel.
-- Dennis Miller


Chris Lewis

Aug 14, 2007, 11:53:17 PM
According to DevilsPGD <spam_na...@crazyhat.net>:

> In message <13c118e...@corp.supernews.com>
> cle...@nortelnetworks.com (Chris Lewis) wrote:
>
> >According to Matthias Leisi <matt...@leisi.net>:
> >> huey.c...@gmail.com wrote:
> >>
> >> > Suppose I'm a big ISP with a 'report this as spam' button in my UI, and
> >> > you're a small mailer whose list includes 10,000 recipients at my ISP.
> >> > Now, I know that, on average, 20% of my users can be counted on to
> >> > report a spam, 2% of my users can be counted on to report ham as spam,
> >>
> >> In my environment (corporate users), around 1/3 of user submissions are
> >> "wrong" (i.e., ham reported as spam, spam as ham). Under these
> >> circumstances, automated decisions / filter adaptation are not feasible.
> >
> >Strange. In our corporate environment (forwarding spam to a special
> >address, no "ham" reporting mechanisms), it'd have to be < 1%.
>
> Forwarding (via attachment, I'd assume) is well beyond the capabilities
> of most of my users -- They are moderately capable of clicking on a
> "This is spam" button.

For most of our users this is equivalent to a "this is spam" button.

We supply a Outlook plugin to our users. They select the email and
hit a button. The plugin does the forwarding/wrapping.

Without the plugin, forwarding from outlook so you have full headers
is just too damn hard.

People using real mail readers just do forwards.

The plugin is nice. Spamsource by Daesoft. Personal use free.
Site-wide license for corporate use very cheap.
--
Chris Lewis,

Age and Treachery will Triumph over Youth and Skill
It's not just anyone who gets a Starship Cruiser class named after them.


axlq

Aug 16, 2007, 1:46:42 PM
In article <13c424m...@corp.supernews.com>,

Chris Lewis <cle...@nortelnetworks.com> wrote:
>Without the plugin, forwarding from Outlook so that you have full
>headers is just too damn hard.

Not really.
1. Click "New mail" to open a new mail editing window.
2. Drag the icon for the spam email into the new mail editing window.
3. The spam will now be an attachment with full headers and all. Forward
as desired.

It takes a bit of training, I suppose, but that's what our company
recommends we do: forward any spam to our company spam-collector
address for filter updating.

-A
