>And which products would you recommend for each level of defense?
CSE, Ltd., develops a range of products on different platforms to support
the strategy I have described. We have released anti-virus products every
month since the first quarter of 1988, and we switched from a mainly
scanner-based technology to a mainly behaviour-checker-based technology in
1992, after realising that scanning workstations on a regular basis would -
bearing in mind the explosive increase in the number of vira - soon become
so time-consuming that it would not be a cost-effective approach for
protecting large networks. (Since boot sector/MBR vira cause at least 50%
of the virus damage, scanning only servers in client/server setups is
obviously not the solution either.) As a one-time developer of classified
hardware and software, I simply applied well-known military security
principles to the virus problem. Thus, I am unable to present unbiased
recommendations as to which product range to choose. CSE's products are
supplied as part of a "Security Subscription", mainly to owners of large
networks. This list does not seem to be the right place to advertise
products, but anybody interested in our solutions is very welcome to email
their snailmail address to me in order to obtain further information.
Please note: because service and support are involved, we only supply
markets in which we are established, i.e. most European countries. We are
willing to discuss co-operation with established information security
providers outside Europe, but not to deliver to end users outside that
area.
Peaceful Easter Greetings!
Niels
------------------------------------------------------------------------
-- Niels J Bjergstrom, Ph.D.             Tel. +31 70 362 2269         --
-- Computer Security Engineers, Ltd.     Fax. +31 70 365 2286         --
-- Postbus 85 502, NL-2508 CE Den Haag   London: +44 181 534 7104     --
-- Netherlands                           Email: n...@csehost.knoware.nl --
-- PGP Public key available on request - please use when mailing vira --
------------------------------------------------------------------------
>Using 2 leading products might raise the probability of detection from 97%
>to 98% or so but one cannot reach levels of 99.9% even with several
>scanners. The lag between scanner updates and new virii means (all numbers
>approximate) 7/day * 20 days = roughly 140, of which many likely will be
>missed by generic detection techniques.
Why do you assume that? Your argument merely illustrates that it is unsafe
to rely on scanners as the primary anti-virus defence. It does not
substantiate the notion that new vira will not be detected by
generic/heuristic methods. The difference between scanners and generic
methods is precisely that the generic methods, if correctly implemented,
require that a new virus *technique* be invented to avoid detection. It is
not important if five or 140 new vira are let loose, so long as these do not
incorporate hitherto unknown (or unforeseen) principles. Considering the
current level of virus inventiveness this involves well under 1% of the new
vira. As for the risk of encountering a new virus which is inventive *and* a
fast reproducer (much faster than the reaction time of the anti-virus
community), we are talking very small figures.
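To put rough numbers on that claim, here is a back-of-the-envelope sketch
using the same approximate figures quoted above - estimates, nothing more:

# Approximate figures taken from this discussion - estimates, not
# measurements.
new_per_day = 7              # new vira appearing per day (approx.)
update_lag_days = 20         # typical gap between scanner updates (approx.)
novel_technique_rate = 0.01  # share of new vira using a genuinely new
                             # technique (well under 1%)

not_yet_in_scanners = new_per_day * update_lag_days          # ~140
likely_to_evade_generics = not_yet_in_scanners * novel_technique_rate

print(not_yet_in_scanners)        # 140
print(likely_to_evade_generics)   # 1.4, i.e. one or two per update cycle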
If we consider a correctly implemented modern anti-virus system with a
behaviour-checker as the first line of defence, a checksummer as the second
and a scanner as the third, then in order to pose a real threat a virus
must be able to bypass two generic methods simultaneously, using hitherto
unknown techniques. Most unlikely!
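In outline, the arrangement works roughly like this (a sketch only; the
three check routines stand for whatever products implement each layer, and
the names are mine, not any particular product's interface):

def behaviour_check(event):
    """Layer 1 (generic): flag suspicious actions such as writes to the
    boot sector/MBR, in-place modification of executables, or going
    resident."""
    raise NotImplementedError   # placeholder for a real behaviour checker

def checksum_ok(path, baseline):
    """Layer 2 (generic): compare the file's current checksum with a
    baseline recorded when the file was believed clean."""
    raise NotImplementedError   # placeholder for a real checksummer

def scanner_hit(path):
    """Layer 3 (known-virus): signature scan; catches only what is
    already in the signature database."""
    raise NotImplementedError   # placeholder for a real scanner

def assess(path, event, baseline):
    # To pose a real threat a virus must slip past BOTH generic layers
    # with techniques their implementers did not foresee; the scanner is
    # a backstop for the known population.
    if behaviour_check(event):
        return "blocked by behaviour checker"
    if not checksum_ok(path, baseline):
        return "flagged by integrity check"
    if scanner_hit(path):
        return "identified by scanner"
    return "no indication of infection"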
>It is unclear to me how to get a situation where the files are clean and
>the signatures saved are for the uninfected file, regardless of which
>virus(es) may be around now or have been present for some time.
In all questions of security, including Information Security, we operate
with probabilities, levels of risk, levels of trust. There are no absolutes,
and even the most rigorous risk analysis/risk reduction iteration will leave
a residual risk, which must be *managed*, i.e. covered by insurance,
contingency plans, etc. This is quite normal, and it is the case for
computer vira as well as for fire or theft.
An important question to ask yourself if you have to implement security on
a network is whether there are areas of the system (as shown by your risk
analysis) that need *better than* baseline security, e.g. with regard
to the virus risk. If so, take steps to implement better defences to cover
the particular assets and processes in that part of the system. Isolate it.
Run it on a VAX instead of on PCs. Use a diskette authorisation system that
is virus immune. Same thing as when connecting to the Internet. You probably
wish to prevent anonymous ftp to your administrative databases...:-).
The PC virus problem has been reduced from something slightly exotic six
years ago to a standard information security problem, a part of good IT
management practice. All the necessary tools are there.
>Virus writing seems to me a particularly destructive form of vandalism.
>What a waste of resources, for everyone, including the vandal.
Yes, and if those who decided to market the sub-standard operating systems
used on most PCs had shown the least bit of responsibility instead of
turning us all into their cash cows, we would not have had the problem at
all. It is ironic that good 8-bit operating systems (e.g. LDOS and NEWDOS),
which at least had started to address security issues, already existed (and
only needed to be ported) at the time MS-DOS was chosen as the operating
system of the future...! Some joke. :-(
Rgds,
Niels Bjergstrom
>Using 2 leading products might raise the probability of detection from 97%
>to 98% or so but one cannot reach levels of 99.9% even with several scanners.
more or less, yes. There are always some viruses no anti-virus producer has
a copy of. Many (but not all) of the companies regularly exchange viruses,
so once one of them gets a copy the others are sure to receive it sooner
or later...sometimes immediately, as in a recent case of a virus named
'Nightfall', sometimes a few months delayed, as in the case of 200 PS-MPC
viruses I just received at the NCSA conference.
>The lag between scanner updates and new virii means (all numbers approximate)
>7/day * 20 days = roughly 140
uh no...not at all. first, scanner producers don't add viruses right up
until a few minutes before release...you have to spend a certain amount of
time on testing for false positives.
second, for many scanners there is a lot more than 20 days between updates.
third, even though a scanner producer has released a new version, he may
have hundreds of viruses sitting on the desk, awaiting analysis.
fourth, "new virus" does not at all mean that the virus will not be
detected. If I look at a batch of 100 new viruses, at least half are
usually detected with my own scanner or one of the other two I use for
comparison purposes.
>of which many likely will be missed by generic detection techniques.
well, if by "generic detection" you mean integrity checking...then
no, most will be detected, if active...if you mean heuristics, then
well...between 10 and 50%.
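for anyone unfamiliar with the term, the kind of integrity checking I mean
works more or less like this (a bare-bones sketch, not any particular
product; real checkers also protect the baseline itself and cover boot
sectors/MBR):

import hashlib
import json
import os

def build_baseline(root, out="baseline.json"):
    # record a checksum for every file while the system is believed clean
    sums = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[path] = hashlib.md5(f.read()).hexdigest()
    with open(out, "w") as f:
        json.dump(sums, f)

def check(baseline="baseline.json"):
    # any file that has changed since the baseline was taken is reported,
    # whether or not any scanner knows the virus responsible
    with open(baseline) as f:
        sums = json.load(f)
    for path, old in sums.items():
        try:
            with open(path, "rb") as f:
                new = hashlib.md5(f.read()).hexdigest()
        except OSError:
            print("missing or unreadable:", path)
            continue
        if new != old:
            print("changed:", path)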
>that means at any given time around 1.5% are not in anyone's scanners;
a fair guess, yes.
>likely most of the new ones also will not be caught as generics,
>so I can expect there to be 1% or so always undetectable by any
>arbitrarily large collection of well maintained scanners.
>A hundred holes in security is too many by far.
Well, the point is that most/all of those 150 are not "in the wild", so they
are only theoretical holes, but if you want to get closer to 100%, you
will have to use other methods.
-frisk
Maybe unjustifiably.
Based on responses I've previously received, which omit numerical
values for the probabilities but indicate that scanners, by the nature
of what they are looking for, tend to scan for similar groups (the only
ones worth scanning for are the known or generic code sequences which
can be scanned for in a usable time period, something like that), I
inferred that they were scanning for generic code fragments and still
missing around 3% each. Maybe I just don't understand the terminology
yet. Sorry, I'm new around here.
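As I understand it, scanning for known code sequences amounts to something
like this (the byte patterns below are made up purely for illustration;
real signature databases hold thousands of entries plus wildcards):

# Made-up signatures for illustration only - not real virus patterns.
SIGNATURES = {
    "Example.A": bytes.fromhex("b44ccd21e80000"),
    "Example.B": bytes.fromhex("fa33c08ed0bc007c"),
}

def scan_file(path):
    # report every known signature whose byte string occurs in the file
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in SIGNATURES.items() if sig in data]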
> to rely on scanners as the primary anti-virus defence. It does not
Yes, others who seem expert also said that, and I was summarizing
their objection to a scanner-only approach.
I am also hoping for a recommendation of products (by name) which, used
together, fill some of each other's blind spots.
> substantiate the notion that new vira will not be detected by
> generic/heuristic methods. The difference between scanners and generic
> methods is precisely that the generic methods, if correctly implemented,
> require that a new virus *technique* be invented to avoid detection. It is
> not important if five or 140 new vira are let loose, so long as these do not
> incorporate hitherto unknown (or unforeseen) principles. Considering the
> current level of virus inventiveness this involves well under 1% of the new
> vira. As for the risk of encountering a new virus which is inventive *and* a
> fast reproducer (much faster than the reaction time of the anti-virus
> community), we are talking very small figures.
I hope you are right. I was trying to assess how many new risks are
out there at any given time. You seem to be saying it's likely (but
certainly not guaranteed) that each month's batch of new viruses has
zero or one new attack approach (well under 1% of roughly 140).
Compared to a crude estimate of 140 individual new viruses, that's
much more encouraging. I have yet to see quantitative data from
anyone that says what the risks are.
> If we consider a correctly implemented modern anti-virus system with a
> behaviour-checker as the first line of defence, a checksummer as the second
> and a scanner as the third, then in order to pose a real threat a virus
> must be able to bypass two generic methods simultaneously, using hitherto
> unknown techniques. Most unlikely!
But if the virus was not detected before it hit my files (or rather,
before I generated checksums), that defense is gone. It seems a weak
defense and likely to mislead.
And if a virus is not yet detected by the commercial scanner I happen
to be using, that leaves only the behaviour-checker as a defense to
detect newly introduced virii, doesn't it?
For new attack methods, the scanner doesn't yet cover them, and the
behavior checker might be weak in the area of the new attack method.
Then if such a virus gets onto my system between the times when I
generate checksums for new files, and it manages to defeat the
behavior blocker, the virus ends up incorporated into my checksum
information.
As you say, multiple defenses are better, but there are no guarantees.
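To make the window I'm worried about concrete: an integrity check only
flags changes made *after* the baseline was recorded, so a file infected
before it is added looks clean forever. Gating additions to the baseline
on the other layers narrows that window but, as above, doesn't close it.
A sketch (the names here are purely illustrative, not any product's
interface):

import hashlib

def checksum(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def add_to_baseline(path, baseline, scanner_hit, behaviour_suspect):
    # scanner_hit and behaviour_suspect stand in for whatever other
    # layers are available; if neither objects, the file is trusted
    # from this point on - infected or not.
    if scanner_hit(path) or behaviour_suspect(path):
        raise ValueError("refusing to baseline a suspect file: " + path)
    baseline[path] = checksum(path)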
>
> >It is unclear to me how to get a situation where the files are clean and
> >the signatures saved are for the uninfected file, regardless of which
> >virus(es) may be around now or have been present for some time.
>
> In all questions of security, including Information Security, we operate
> with probabilities, levels of risk, levels of trust. There are no absolutes,
> and even the most rigorous risk analysis/risk reduction iteration will leave
> a residual risk, which must be *managed*, i.e. covered by insurance,
> contingency plans, etc. This is quite normal, and it is the case for
> computer vira as well as for fire or theft.
Well, I can pretty well determine, at any point in time, whether my
machine has caught fire or been stolen. I cannot determine whether or
not it is infected. This is a fundamental difference: the risks can
be numerically comparable, but their nature is clearly different.
> An important question to ask yourself if you have to implement security on
> a network is whether there are areas of the system (as shown by your risk
> analysis) that need *better than* baseline security, e.g. with regard
> to the virus risk. If so, take steps to implement better defences to cover
> the particular assets and processes in that part of the system. Isolate it.
> Run it on a VAX instead of on PCs. Use a diskette authorisation system that
> is virus immune. Same thing as when connecting to the Internet. You probably
> wish to prevent anonymous ftp to your administrative databases...:-).
I assume you mean that running on a VAX changes the odds but does not
zero them. The same goes for any other environment.
(Viable in a corporate environment if you are willing to trade off
things like application availability and pricing, but not at home.)
(snip)
> Yes, and if those who decided to market the sub-standard operating systems
> used on most PCs had shown the least bit of responsibility instead of
> turning us all into their cash cows, we would not have had the problem at
> all. It is ironic that good 8-bit operating systems (e.g. LDOS and NEWDOS),
> which at least had started to address security issues, already existed (and
> only needed to be ported) at the time MS-DOS was chosen as the operating
> system of the future...! Some joke. :-(
Yup, we've traveled a strange path as the result of many steps which
looked rational. Take memory management. (Please!)
Bill Gates understands that the mass software market is like any other
consumer market. Pricing isn't the only thing, but it's big.
In 1982, I saw MS-DOS at $40, CP/M-86 at $200, and UCSD? at $950.
Guess what I bought.
Security was a nonissue at the time. So were a lot of things.
IBM called their PC operation Entry Systems Division for a reason.
It's all us end users who let this happen.