this is what has emerged... (somewhat re-ordered)
1) define trojan (and perhaps other malware) in terms that may be
more useful than the current one(s) to facilitate/expedite sample
classification. (perhaps, instead of 'trojans' we should be talking
about 'potential trojans' since we might be able to eliminate the human
element of the classification)
2) decide on criteria for when a malware definition should be added to
the default set of definitions scanned for by a scanner. (see my
hdformat.exe / sexyfun.exe example... at some point the sexyfun.exe
version will pose enough of a threat to outweigh the false positives
that hdformat.exe will generate...)
3) decide on criteria for when a malware definition should be removed
from the default set of definitions scanned for by a scanner. (see bo2k
or other 'pro' RATs...)
4) decide whether the anti-trojan tool should be mixed in with an
existing anti-virus tool (and thereby bloat the definition set) or make
them separate but combinable (like avs and firewalls are), or whether
some other division is sensible.
5) decide on a means of securely authenticating controversial materials
by administrators in order to help mitigate false alarms (i.e. build a
secure 'ignore list')...
6) decide how best to train the user to make intelligent decisions based
on the output an anti-trojan tool is able to provide.
7) determine theoretical detection best practices, above and beyond
simple scan string comparison, that take into account the ways in which
trojans can be obfuscated or created using other programs. (i like the
idea of including directed change detection on key exploitable areas)
8) define the support model expected to be required for any 'solution'
to the other problems and use this to help evaluate the efficacy of said
'solution'. (if the problems a user is likely to face using the toolset
are unreasonable or unsolvable then the toolset was not well designed)
... it was nice to see lots of people had ideas on how some of these can
be solved, but the question now is can they all be solved and can the
individual solutions work together in the same anti-trojan or
anti-malware development programme - and if so, what are those solutions
and how do they fit together...
it seems to me that right now the problem we have as users is that there
are no really good anti-trojan/malware tools out there because the
people who could implement such tools face the problems outlined above -
by attacking those problems we may get rid of the barrier that stands in
the way of having our problem solved...
--
"when surveys of all the world's countries are done,
canada frequently rates number one.
are we the best country? well we'll never know...
there's nowhere else we can afford to go."
> ok, well, posts on the problems associated with anti-trojan software
> have waned so i guess that for our purposes we'll have to think of the
> list of problems as being more or less complete...
>
> this is what has emerged... (somewhat re-ordered)
>
> 1) define trojan (and perhaps other malware) in terms that may perhaps
> be more useful than the current one(s) to facilitate/expedite sample
> classification. (perhaps, instead of 'trojans' we should be talking
> about 'potential trojans' since we might be able to eliminate the human
> element of the classification)
Would you envisage this being a selectable attribute in the definition?
I would think that it would be very hard to eliminate the human element of
classification. There would be definite grey areas - some joke programs
for instance.
> 2) decide on a criteria for when a malware definition should be added to
> the default set of definitions scanned for by a scanner. (see my
> hdformat.exe / sexyfun.exe example... at some point the sexyfun.exe
> version will pose enough of a threat to mitigate the false positives
> that hdformat.exe will generate...)
Maybe some sort of scoring method could be used, i.e. it meets enough of
criteria xyz to be counted as a trojan.
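To make the idea concrete, here is a rough sketch (Python, purely
illustrative - the criteria names, weights and threshold are all invented)
of what such a scoring scheme might look like:

# toy scoring scheme: a sample counts as a (potential) trojan if it meets
# enough of the listed criteria. criteria and threshold are invented.
CRITERIA = {
    "undisclosed_destructive_payload": 3,
    "misrepresents_its_purpose": 2,
    "opens_unsolicited_network_listener": 2,
    "sends_data_to_remote_host": 1,
}
THRESHOLD = 4

def score(observed):
    # 'observed' is the set of criterion names an analyst ticked off
    return sum(w for name, w in CRITERIA.items() if name in observed)

def is_potential_trojan(observed):
    return score(observed) >= THRESHOLD

# e.g. a file that silently formats the disk and lies about its purpose:
print(is_potential_trojan({"undisclosed_destructive_payload",
                           "misrepresents_its_purpose"}))   # True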
> 3) decide on a criteria for when a malware definition should be removed
> from the default set of definitions scanned for by a scanner. (see bo2k
> or other 'pro' RAT's...)
These would need to be defined further to cover the scenario where a
legitimate RAT was bound to something else with the intent to trojanise.
For instance, NetBus bound to a game file.
> 4) decide whether the anti-trojan tool should be mixed in with an
> existing anti-virus tool (and thereby bloat the definition set) or make
> them separate but combinable (like avs and firewalls are), or whether
> some other division is sensible.
In my mind it makes more sense to separate them, but I know not everyone
here feels like that :)
> 5) decide on a means of securely authenticating controversial materials
> by administrators in order to help mitigate false alarms (i.e. build a
> secure 'ignore list')...
Such whitelisting has been around as an idea for a long time, Nick often
mentions it. The problem, as you say, is authentication. I'm not sure that
isn't going to come up against the halting problem fairly quickly.
Any ignore list will be compromisable by sideways attacks - i.e. direct
observation leading to subversion by targeting the legit channel by which
the ignored programs are allowed.
> 6) decide how best to train the user to make intelligent decisions based
> on the output an anti-trojan tool is able to provide.
Heehee!! :)
> 7) determine theoretical detection best practices, above and beyond
> simple scan string comparison, that take into account the ways in which
> trojans can be obfuscated or created using other programs. (i like the
> idea of including directed change detection on key exploitable areas)
There are products that already do this - and glaring problems in their use
- however, in combination with more traditional methods there might be some
value in them.
> 8) define the support model expected to be required for any 'solution'
> to the other problems and use this to help evaluate the efficacy of said
> 'solution'. (if the problems a user is likely to face using the toolset
> are unreasonable or unsolvable then the toolset was not well designed)
>
> ... it was nice to see lots of people had ideas on how some of these can
> be solved, but the question now is can they all be solved and can the
> individual solutions work together in the same anti-trojan or
> anti-malware development programme - and if so, what are those solutions
> and how do they fit together...
>
> it seems to me that right now the problem we have as users is that there
> are no really good anti-trojan/malware tools out there because the
> people who could implement such tools face the problems outlined above -
> by attacking those problems we may get rid of the barrier that stands in
> the way of having our problem solved...
While all of this is certainly interesting, I don't think the problem is
solvable. The problem is essentially a social one, not a technological one.
There is relatively little you can do with technology to mitigate
social problems. A fairly good example: street cameras tend to reduce
crime in their direct locality, but not overall; the crime tends to shift
to other areas, or methods to subvert them - damage/disablement/masking
etc. - are employed. The problem may be alleviated slightly, or its focus
shifted, but it cannot be solved by technology. There is only one
solution: reconstruct society on commonly agreed rules and standards of
behaviour ;-) Not very practical, but I guarantee it would work.
--
Andrew Lee | gla...@gladius.f9.org.uk \PGP:DC84 FD28 DA8A E38A A9DD|
AVIEN Founding Member |http://avien.org \ID:18A9 AFAD 5422 43F1 4C81|
// It is not certain that everything is uncertain -- Blaise Pascal
// Opinions expressed are my personal views, not those of my employer
<snip outline>
>... it was nice to see lots of people had ideas on how some of these can
>be solved, but the question now is can they all be solved and can the
>individual solutions work together in the same anti-trojan or
>anti-malware development programme - and if so, what are those solutions
>and how do they fit together...
>
>it seems to me that right now the problem we have as users is that there
>are no really good anti-trojan/malware tools out there because the
>people who could implement such tools face the problems outlined above -
>by attacking those problems we may get rid of the barrier that stands in
>the way of having our problem solved...
Why do you think there are no really good anti-trojan/malware tools in
one package out there already? It seems to me there are. It seems the
problems, except for good alert reporting and built-in removal
capabilities, are being addressed by a number of antivirus product
vendors. A small few do a good job at detection at least, IMO. But how
do you provide cleaning of endless malware which affects the registry
and several other startup areas on different versions of the OS without
huge databases?
It seems to me that what's happening now is a growing division where
some vendors are choosing to remain anti-virus and other av vendors
are pursuing the anti-malware route. I suspect that testing agencies
such as the VTC and AV-Test.org will increasingly include and
emphasize Trojan/malware detection tests (along with viruses) and
we'll see more and more av vendors no longer submitting their products
for testing at these agencies. They will become known as malware
testing agencies and the few vendors submitting their products
will be known as anti-malware vendors.
Not that the anti-VIRUS vendors won't detect _any_ Trojans, but they
will detect only those few deemed significantly ITW.
Hmmm... IMO that's pointless. The word "potential" will be dropped very
quickly because it's much more convenient to just say "Trojan". It may
be formally more accurate to say "potential Trojan" but I doubt that
this expression will gain acceptance.
> 2) decide on a criteria for when a malware definition should be added to
> the default set of definitions scanned for by a scanner. (see my
> hdformat.exe / sexyfun.exe example... at some point the sexyfun.exe
> version will pose enough of a threat to mitigate the false positives
> that hdformat.exe will generate...)
Well, a malware definition should be added as soon as there is
significant enough possibility that the malware might cause problems.
For example, a program named sexyfun.exe that formats the hard disk
is not really dangerous if it first asks for confirmation. In that
case I wouldn't add it even though its name is clearly meant to
deceive.
A program named hdformat.exe that formats the hard disk in accordance
with its name but without asking for confirmation is rather dangerous
too in my opinion. It's maybe not strictly speaking malware but...
Oh, it's a dilemma! ;-) Seriously, there will always be some highly
controversial programs. I doubt that we will be able to solve the
problem of determining what constitutes a Trojan and what doesn't.
One could theoretically create a kind of special definition file that
includes such controversial programs and let the (experienced) user
decide whether he wants to scan for them, and also let him decide
whether a detected potential Trojan is to be considered malicious in
his particular context.
Also malware scanners should display appropriate alerts and warn that
a program *might* be malicious as opposed to bluntly labelling it as
malicious.
> 4) decide whether the anti-trojan tool should be mixed in with an
> existing anti-virus tool (and thereby bloat the definition set) or make
> them separate but combinable (like avs and firewalls are), or whether
> some other division is sensible.
It's simply more convenient to have only one scanner that scans for
several types of malware. I definitely prefer definition file bloat
to user interface bloat.
A modular set of definition files like KAV's would be great too,
so everyone is free to scan for and ignore what they want.
> 6) decide how best to train the user to make intelligent decisions based
> on the output an anti-trojan tool is able to provide.
In order for them to make intelligent decisions, the anti-Trojan tool
needs to output intelligent messages in the first place. I'm thinking
of AntiVir that reported dialers as "dialer virus" or something like
that, which got it into trouble.
Very good point. I don't think that deleting files and cleaning
registry entries would constitute a problem though. I'm sure it
can be implemented pretty easily.
You mean maybe 2 scan engines; one for just trojans?
LH
I have no idea how it is currently implemented in virus scanners that
search for Trojans, but cleaning up the mess created by Trojans should
not be an insurmountable problem.
my thinking is that a trojan or even potential trojan has to do the
damage (or whatever it is that it does to the computer) itself... there
are all manner of legitimate things that can scare users into doing
damage... so i wouldn't count joke programs as trojans or potential
trojans...
>>2) decide on a criteria for when a malware definition should be added to
>>the default set of definitions scanned for by a scanner. (see my
>>hdformat.exe / sexyfun.exe example... at some point the sexyfun.exe
>>version will pose enough of a threat to mitigate the false positives
>>that hdformat.exe will generate...)
>
>
> Maybe some sort of scoring method could be used, i.e. it meets enough of
> criteria xyz to be counted as a trojan.
indeed... but what criteria? the estimated spread of sexyfun.exe divided
by the estimated market share held by hdformat.exe?
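to make the trade-off concrete, something like this is what i'm picturing
(python, purely illustrative - a slightly different framing than a
straight ratio, and every name and number below is invented):

# toy 'add to the default set?' decision: add the definition once the
# estimated harm prevented outweighs the estimated false-positive cost.
# all figures are invented placeholders.
def should_add_definition(est_hits_per_month, est_cost_per_hit,
                          est_false_alarms_per_month,
                          est_cost_per_false_alarm):
    harm_prevented = est_hits_per_month * est_cost_per_hit
    false_alarm_cost = est_false_alarms_per_month * est_cost_per_false_alarm
    return harm_prevented > false_alarm_cost

# sexyfun.exe starting to spread vs. legitimate hdformat.exe users who
# would get a false alarm:
print(should_add_definition(500, 200.0, 50, 10.0))   # True - add it
print(should_add_definition(2, 200.0, 5000, 10.0))   # False - not yet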
>>3) decide on a criteria for when a malware definition should be removed
>>from the default set of definitions scanned for by a scanner. (see bo2k
>>or other 'pro' RAT's...)
>
>
> These would need to be defined further to cover the scenario where a
> legitimate RAT was bound to something else with the intent to trojanise.
> For instance, NetBus bound to a game file.
maybe... some have suggested not removing things at all... and my
suggestion for #1 takes care of instances where pro RATs are usable for
illegitimate purposes... it's a potential trojan... sounds wishy-washy,
i know... the user (or administrator) would have to make the call as to
whether it was supposed to be there or not... an 'ignore list' of some
sort would help the anti-trojan's usability here...
>>4) decide whether the anti-trojan tool should be mixed in with an
>>existing anti-virus tool (and thereby bloat the definition set) or make
>>them separate but combinable (like avs and firewalls are), or whether
>>some other division is sensible.
>
>
> In my mind it makes more sense to separate them, but I know not everyone
> here feels like that :)
i think it makes sense too... some folks use security tools that match
their risk/exposure levels - they may decide that trojans are not a risk
in their environment for whatever reason...
leaving anti-trojan and anti-virus functionality separated allows those
people to design more efficient security measures... lumping everything
together seems to cater more to those who are looking for a panacea that
they don't have to worry about later...
>>5) decide on a means of securely authenticating controversial materials
>>by administrators in order to help mitigate false alarms (i.e. build a
>>secure 'ignore list')...
>
>
> Such whitelisting has been around as an idea for a long time, Nick often
> mentions it. The problem, as you say, is authentication. I'm not sure that
> isn't going to come up against the halting problem fairly quickly.
> Any ignore list will be compromisable by sideways attacks - i.e. direct
> observation leading to subversion by targeting the legit channel by which
> the ignored programs are allowed.
not sure i follow... are you referring to illegitimately being added to
the ignore list, or modifying the behaviour of programs already on the
ignore list?
i admit those problems are hard to solve... but for the sake of
argument let's consider a possible solution... let's say for the sake of
argument that the ignore list contains a variety of information (hash of
the program in question, hashes of the files upon which its behaviour
depends, the command-line arguments normally used with it, etc)... let's
say that it can be stored both online (to facilitate background
scanning) and offline for a more secure usage model and as a backup for
the online one... and let's say that it's been digitally signed* by a key
that is stored offline... tampering with the ignore list becomes detectable
and reversible, tampering with the environment that the ignore list
contains a snapshot of also becomes detectable (reversible with backups
i suppose)...
(*digitally signed using an algorithm like RSA, such that the key for
checking the signature and the key for producing the signature are not
the same)
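a minimal sketch of how the signing part might look (python, using the
third-party 'cryptography' package; the entry layout, paths and hash
placeholders are all made up) - the point being that only the
verification key ever needs to live on the scanned machine:

# sketch of a digitally signed ignore list. the private key is generated
# and kept offline; only the public key and the signed list live on the
# scanned machine. needs the third-party 'cryptography' package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# --- done offline by the administrator ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

ignore_list = [{
    "program": "C:/tools/remote_admin.exe",               # invented entry
    "program_sha256": "<sha256 of the binary>",           # placeholder
    "dependency_sha256": {"C:/tools/remote_admin.ini": "<sha256>"},
    "allowed_args": ["/listen", "/port=12345"],
}]
blob = json.dumps(ignore_list, sort_keys=True).encode()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(blob, pss, hashes.SHA256())

# --- done on the scanned machine before trusting the list ---
try:
    public_key.verify(signature, blob, pss, hashes.SHA256())
    print("ignore list intact - safe to use")
except InvalidSignature:
    print("ignore list has been tampered with - fall back to alerting")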
>>6) decide how best to train the user to make intelligent decisions based
>>on the output an anti-trojan tool is able to provide.
>
>
> Heehee!! :)
if the tool is detecting known objects, just how much can be known about
them and how much of that can go to informing the user/administrator in
an intelligent manner?
>>7) determine theoretical detection best practices, above and beyond
>>simple scan string comparison, that take into account the ways in which
>>trojans can be obfuscated or created using other programs. (i like the
>>idea of including directed change detection on key exploitable areas)
>
>
> There are products that already do this - and glaring problems in their use
> - however, in combination with more traditional methods there might be some
> value in them.
products that do which? detect trojans using something other than dumb
scanning? perform directed change detection? essentially this is the
'figure out scanning techniques that are optimized for trojans rather
than viruses' problem...
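for what it's worth, the core of directed change detection is not much
code - a rough sketch (python, purely illustrative; the watched paths are
just examples of 'key exploitable areas'):

# toy directed change detection: hash a short list of 'key exploitable
# areas' once to build a baseline, then re-hash later and report what
# changed. the watched paths are examples only.
import hashlib, json, os

WATCHED = [r"C:\WINDOWS\win.ini", r"C:\WINDOWS\system.ini", r"C:\autoexec.bat"]

def snapshot(paths):
    snap = {}
    for p in paths:
        if os.path.exists(p):
            with open(p, "rb") as f:
                snap[p] = hashlib.sha256(f.read()).hexdigest()
        else:
            snap[p] = None
    return snap

def save_baseline(paths, baseline_file="baseline.json"):
    with open(baseline_file, "w") as f:
        json.dump(snapshot(paths), f)

def report_changes(paths, baseline_file="baseline.json"):
    with open(baseline_file) as f:
        old = json.load(f)
    return [p for p, h in snapshot(paths).items() if old.get(p) != h]

# run save_baseline(WATCHED) on a known-clean system, then call
# report_changes(WATCHED) periodically; anything listed deserves a look.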
>>8) define the support model expected to be required for any 'solution'
>>to the other problems and use this to help evaluate the efficacy of said
>>'solution'. (if the problems a user is likely to face using the toolset
>>are unreasonable or unsolvable then the toolset was not well designed)
>>
>>... it was nice to see lots of people had ideas on how some of these can
>>be solved, but the question now is can they all be solved and can the
>>individual solutions work together in the same anti-trojan or
>>anti-malware development programme - and if so, what are those solutions
>>and how do they fit together...
>>
>>it seems to me that right now the problem we have as users is that there
>>are no really good anti-trojan/malware tools out there because the
>>people who could implement such tools face the problems outlined above -
>>by attacking those problems we may get rid of the barrier that stands in
>>the way of having our problem solved...
>
>
> While all of this is certainly interesting, I don't think the problem is
> solvable. The problem is essentially a social one, not a technological one.
> There is relatively little you can do with technology to mitigate
> social problems.
it's true that technology cannot *solve* social problems, but i don't
think it's necessarily true that it cannot mitigate certain social
problems...
further, i'm not sure of the social problem to which you refer... (we
have so many)
> A fairly good example: street cameras tend to reduce
> crime in their direct locality,
so do street lights...
> but not overall, the crime tends to shift
> to other areas, or methods to subvert them - damage/disablement/masking etc
> are employed. The problem may be alleviated slightly, or its focus shifted,
> but it cannot be solved by technology.
so it *does* have an impact...
> There is only one solution,
> reconstruct society on commonly agreed rules and standards of behaviour ;-)
> Not very practical, but I guarantee it would work.
i try my best not to call pieces of technology 'solutions', i know they
aren't anything of the sort... i prefer to call them tools... tools can
be used to make things better...
my above references to solutions or solved problems were in regards to
technical or logical problems, or in the final case the problem of not
having the tools for the job...
i don't think there are... i don't think there can be with the current
model of what a trojan is... how many have dropped detection of back
orifice and netbus? are they not still usable as trojans? of course they
are...
i don't think we'll see any really good efforts in anti-trojans until we
can look at code and say 'detect this' or 'don't detect this'... that's
not possible right now...
> It seems the
> problems, except for good alert reporting and built-in removal
> capabilities, are being addressed by a number of antivirus product
> vendors.
and the way they address them doesn't necessarily serve the good of the
user...
> A small few do a good job at detection at least, IMO. But how
> do you provide cleaning of endless malware which affects the registry
> and several other startup areas on different versions of OS without
> huge data bases?
backups... backing up the registry and all those other start up areas is
not an unreasonable thing to do... facilitating it should be a priority
for an anti-trojan developer...
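to give an idea of how little is involved, a rough sketch (python,
windows only, purely illustrative - it only covers the two classic Run
keys, a real tool would cover far more start up areas):

# toy backup of a couple of start up areas on windows: dump the values
# under the classic Run keys to a text file that can be eyeballed or
# restored by hand later. illustration only - not complete coverage.
import winreg

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def dump_run_keys(out_path="startup_backup.txt"):
    with open(out_path, "w") as out:
        for root, subkey in RUN_KEYS:
            hive = "HKLM" if root == winreg.HKEY_LOCAL_MACHINE else "HKCU"
            out.write("[%s\\%s]\n" % (hive, subkey))
            try:
                key = winreg.OpenKey(root, subkey)
            except OSError:
                continue
            i = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(key, i)
                except OSError:          # no more values
                    break
                out.write("%s = %r\n" % (name, value))
                i += 1
            winreg.CloseKey(key)

if __name__ == "__main__":
    dump_run_keys()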
> It seems to me that what's happening now is a growing division where
> some vendors are choosing to remain anti-virus and other av vendors
> are pursuing the anti-malware route.
only so far as their market demands it...
> I suspect that testing agencies
> such as the VTC and AV-Test.org will increasingly include and
> emphasize Trojan/malware detection tests
which i think is premature... determining if something belongs in the
trojan set or the malware set is an ill-defined problem at best, both
for developers and for testers... and if a tester's samples are suspect
then the tester's results are suspect...
if/when the problem of classification becomes better defined, then i
would expect to see better efforts at detection from av developers and
better, more scientific results from testing organizations... until
then, however . . .
> (along with viruses) and
> we'll see more and more av vendors no longer submitting their products
> for testing at these agencies. They will become known as malware
> testing agencies and the the few vendors submitting their products
> will be known as anti-malware vendors.
>
> Not that the anti-VIRUS vendors won't detect _any_ Trojans, but they
> will detect only those few deemed significantly ITW.
it's probably that kind of ad hoc'ery that has led to testing
organizations testing malware detection (yet more ad hoc'ery)...
i'm not proposing an expression... they could call it quezelbub or blue
cheese... it's the concept behind it that i'm concerned with... if we
limit ourselves to looking at the functional capabilities of a thing and
come up with classifications that can be based on that information then
we're in a much better position to cope with entire classes of things...
>>2) decide on a criteria for when a malware definition should be added to
>>the default set of definitions scanned for by a scanner. (see my
>>hdformat.exe / sexyfun.exe example... at some point the sexyfun.exe
>>version will pose enough of a threat to mitigate the false positives
>>that hdformat.exe will generate...)
>
>
> Well, a malware definition should be added as soon as there is
> significant enough possibility that the malware might cause problems.
a probability based scoring method... but how do you determine the
probabilities...
> For example, a program named sexyfun.exe that formats the hard disk
> is not really dangerous if it first asks for confirmation. In that
> case I wouldn't add it even though its name is clearly meant to
> deceive.
> A program named hdformat.exe that formats the hard disk in accordance
> with its name but without asking for confirmation is rather dangerous
> too in my opinion. It's maybe not strictly speaking malware but...
if neither asked for confirmation, both would be 'potential trojans'
since one is already named sexyfun and the other could be renamed to
sexyfun...
> Oh, it's a dilemma! ;-) Seriously, there will always be some highly
> controversial programs. I doubt that we will be able to solve the
> problem of determining what constitutes a Trojan and what doesn't.
which is why i think it's a poorly defined classification from a
practical point of view... and a better set of classifications is needed...
> One could theoretically create a kind of special definition file that
> includes such controversial programs and let the (experienced) user
> decide whether he wants to scan for them, and also let him decide
> whether a detected potential Trojan is to be considered malicious in
> his particular context.
indeed... if one were to detect 'potential trojans' that seems to be
exactly how one would alert the user... though, since we're probably
talking about known binaries it should be possible to give the user help
in determining if it's malicious in his/her situation (if something has
a context in which it's legitimate, describe that context for him/her)...
> Also malware scanners should display appropriate alerts and warn that
> a program *might* be malicious as opposed to bluntly labelling it as
> malicious.
agreed...
>>4) decide whether the anti-trojan tool should be mixed in with an
>>existing anti-virus tool (and thereby bloat the definition set) or make
>>them separate but combinable (like avs and firewalls are), or whether
>>some other division is sensible.
>
>
> It's simply more convenient to have only one scanner that scans for
> several types of malware. I definitely prefer definition file bloat
> to user interface bloat.
>
> A modular set of definition files like KAV's would be great too,
> so everyone is free to scan for and ignore what they want.
so separating definition files rather than completely separate
applications... i wonder if this is possible given problem 7 (scanning
techniques optimized for trojans)... are the scanning techniques used
for detecting viruses the best techniques for detecting trojans and vice
versa?
>>6) decide how best to train the user to make intelligent decisions based
>>on the output an anti-trojan tool is able to provide.
>
>
> In order for them to make intelligent decisions, the anti-Trojan tool
> needs to output intelligent messages in the first place. I'm thinking
> of AntiVir that reported dialers as "dialer virus" or something like
> that, which got it into trouble.
i think they've all messed up on occasion...
So you are suggesting a sort of bottom-up approach?
> a probability based scoring method... but how do you determine the
> probabilities...
Not probability, possibility. Obviously it's not easy to determine
whether a piece of blue cheese :-) has the ability to cause sufficient
damage to warrant detection. I think common sense is important here.
Take a look at the program's capabilities and characteristics and
try to figure out how that program would behave on different systems,
on the assembly programmer's system, the 80-year-old grandmother's
system, on home user systems or in corporate networks. Also consider
the market that the anti-malware program targets.
> indeed... if one were to detect 'potential trojans' that seems to be
> exactly how one would alert the user... though, since we're probably
> talking about known binaries it should be possible to give the user help
> in determining if it's malicious in his/her situation (if something has
> a context in which it's legitimate, describe that context for him/her)...
Yes - but then the description needs to be fine-tuned. After all,
two different people may be situated in the same context but may
have different opinions on the legitimacy of a piece of software.
So describing a "general" context won't suffice.
The problem with assessing the legitimacy of a piece of software
is that it requires some deliberation on the part of the user, and
a user may not necessarily be qualified to do that.
> so separating definition files rather than completely separate
> applications... i wonder if this is possible given problem 7 (scanning
> techniques optimized for trojans)... are the scanning techniques used
> for detecting viruses the best techniques for detecting trojans and vice
> versa?
KAV already uses separate definition files for viruses and (what it
considers to be) Trojans. So it's entirely possible to do it.
i don't think there's a good alternative... trojan classification is
poorly defined for the purposes at hand...
>>a probability based scoring method... but how do you determine the
>>probabilities...
>
>
> Not probability, possibility.
but possibility is a boolean quality... a boolean is our desired result
(add or don't add to the definition set) but the criteria on which it's
based need not be...
> Obviously it's not easy to determine
> whether a piece of blue cheese :-) has the ability to cause sufficient
> damage to warrant detection.
so what do you call 'sufficient'...
> I think common sense is important here.
> Take a look at the program's capabilities and characteristics and
> try to figure out how that program would behave on different systems,
> on the assembly programmer's system, the 80-year-old grandmother's
> system, on home user systems or in corporate networks. Also consider
> the market that the anti-malware program targets.
so then it isn't a general purpose tool?
>>indeed... if one were to detect 'potential trojans' that seems to be
>>exactly how one would alert the user... though, since we're probably
>>talking about known binaries it should be possible to give the user help
>>in determining if it's malicious in his/her situation (if something has
>>a context in which it's legitimate, describe that context for him/her)...
>
>
> Yes - but then the description needs to be fine-tuned. After all,
> two different people may be situated in the same context but may
> have different opinions on the legitimacy of a piece of software.
> So describing a "general" context won't suffice.
> The problem with assessing the legitimacy of a piece of software
> is that it requires some deliberation on the part of the user, and
> a user may not necessarily be qualified to do that.
whether or not the user is qualified to do it is ... i hazard to say
irrelevant, but, if s/he has taken on that responsibility then they're
simply going to have to become qualified, or at least improve their
qualifications...
but generally, i think this should be much easier than you seem to
suggest... a controversial app. has an intended legitimate purpose - the
user/administrator should be aware if they have intentionally installed
something with that purpose in mind...
>>so separating definition files rather than completely separate
>>applications... i wonder if this is possible given problem 7 (scanning
>>techniques optimized for trojans)... are the scanning techniques used
>>for detecting viruses the best techniques for detecting trojans and vice
>>versa?
>
>
> KAV already uses separate definition files for viruses and (what it
> considers to be) Trojans. So it's entirely possible to do it.
it can detect the ones it detects, yes, but ... i think lee described
this better, but, the best way to detect trojans *may* be fundamentally
different than the best way to detect viruses...
>>>it seems to me that right now the problem we have as users is that there
>>>are no really good anti-trojan/malware tools out there because the
>>>people who could implement such tools face the problems outlined above -
>>>by attacking those problems we may get rid of the barrier that stands in
>>>the way of having our problem solved...
>>
>> Why do you think there are no really good anti-trojan/malware tools in
>> one package out there already? It seems to me there are.
>
>i don't think there are... i don't think there can be with the current
>model of what a trojan is... how many have dropped detection of back
>orifice and netbus?
I just checked recently and BO is being detected by the several av I
have on hand. Dunno if there's a version of BO that isn't detected for
some reason. Based on conversations here it seems there are different
versions of Netbus and only the "Pro" version is not detected.
>are they not still usable as trojans? of course they
>are...
I guess you mean programs that can be used for malicious purposes?
>i don't think we'll see any really good efforts in anti-trojans until we
>can look at code and say 'detect this' or 'don't detect this'... that's
>not possible right now...
In your dreams :) Human judgement as to context is required. The code
of your sexyfun Trojan is the same as the code of your other "ok"
whatchacallit. I don't think you can just look at code in the case of
viruses either without a terrible false alarm problem. Viruses that
are actually detected are Trojans it seems to me. A program that
simply copies itself is replicative code and meets one criterion of
"virus" but av may well not alert since the code doesn't spread or
attach or do any harm. A judgement as to purpose or intent is
obviously made in cases of non-heuristic detection. And it seems to me
heuristics are even more dangerous if not practically useless in the
case of Trojans.
>> It seems the
>> problems, except for good alert reporting and built-in removal
>> capabilities, are being addressed by a number of antivirus product
>> vendors.
>
>and the way they address them don't necessarily serve the good of the
>user...
Don't know what you mean. I think the good of the user is being served
quite well by the current anti-malware scanners.
>> A small few do a good job at detection at least, IMO. But how
>> do you provide cleaning of endless malware which affects the registry
>> and several other startup areas on different versions of OS without
>> huge data bases?
>
>backups... backing up the registry and all those other start up areas is
>not an unreasonable thing to do... facilitating it should be a priority
>for an anti-trojan developer...
I disagree. Backing up is not the responsibility of an anti-malware
scanner.
Well, it doesn't seem like a bad idea to me. What categories of
"functional capabilities" do you have in mind?
> but possibility is a boolean quality... a boolean is our desired result
> (add or don't add to the definition set) but the criteria on which it's
> based need not be...
>
> > Obviously it's not easy to determine
> > whether a piece of blue cheese :-) has the ability to cause sufficient
> > damage to warrant detection.
>
> so what do you call 'sufficient'...
I would have to give that more thought. It would be "sufficient" if
certain conditions are met; we must mull over what these conditions
are and when they are met. Perhaps the blue cheese should be capable
of causing more than just a few isolated incidents (but then you can
argue that those users affected by the isolated incidents certainly
would have appreciated protection too), or its possible legitimate uses
should substantially outweigh the number of bad things you can do with
it. For example a VBS interpreter can interpret and execute VBS viruses,
but most of the time it will interpret legitimate VBS programs, so
alerting on the VBS interpreter on the grounds of its ability to cause
damage would be silly. This is perhaps a bad example but I hope you get
the idea.
> so then it isn't a general purpose tool?
Unless you can come up with a general purpose definition of Trojan
I suppose you won't be able to come up with a general purpose
scanner...
> but generally, i think this should be much easier than you seem to
> suggest... a controversial app. has an intended legitimate purpose - the
> user/administrator should be aware if they have intentionally installed
> something with that purpose in mind...
Provided the scanner outputs intelligent alert messages.
> > KAV already uses separate definition files for viruses and (what it
> > considers to be) Trojans. So it's entirely possible to do it.
>
> it can detect the ones it detects, yes, but ... i think lee described
> this better, but, the best way to detect trojans *may* be fundamentally
> different than the best way to detect viruses...
So you are implying that KAV's method of detecting Trojans may be,
but is not necessarily, the best? Well, KAV's method may not
necessarily be the best but it has proven to be quite effective for
detecting the programs currently in its Trojan database, and frankly
I doubt that our uncontroversial Trojans, once we have decided what
constitutes an uncontroversial Trojan, will look much different from
the programs currently in KAV's Trojan database.
I am not all that familiar with scanner design but my gut tells me
that detecting known Trojans would be less trouble than detecting
known viruses (Trojans don't replicate so there is no polymorphism
and stuff like that to deal with), while detecting new Trojans
heuristically would be more delicate than detecting new viruses (since
the behavior of a virus is generally more "apparent" and "obvious" than
that of a Trojan)...
> > Such whitelisting has been around as an idea for a long time, Nick often
> > mentions it. The problem, as you say, is authentication. I'm not sure that
> > isn't going to come up against the halting problem fairly quickly.
> > Any ignore list will be compromisable by sideways attacks - i.e. direct
> > observation leading to subversion by targeting the legit channel by which
> > the ignored programs are allowed.
>
> not sure i follow...are you referring to illegitimately being added to
> the ignore list, or modifying the behaviour of programs already on the
> ignore list?
Maybe the whitelist that inhibits the alert (note I didn't say ignore
the file) can be based on file integrity (a checksum) combined in an
algorithm with the filename chosen by the admin. This would
make (at least some) modifications to 'whitelisted' programs
problematic for the malware writer (obscurity plus file integrity),
and the fact that the 'whitelisted' files are not 'ignored' by the
scanner means that if any were to be modified (including added-in
[aftermarket?] malware), they would now alert, because the
combination of 'filename alg(checksum)-whitelisted' has now
been altered, or now includes some other known malware
not whitelisted with that 'filename alg(checksum)'.
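A rough sketch of that alert-inhibition logic (Python, purely
illustrative - the whitelist entry and the choice of SHA-256 are just
assumptions):

# toy alert-inhibition: a detection is suppressed only when both the
# filename and the file's checksum match a whitelist entry created by
# the admin. a modified file no longer matches, so it alerts again.
import hashlib, os

def checksum(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# built by the administrator at authorization time: filename -> checksum
# (the value here is an invented placeholder)
WHITELIST = {"netbus.exe": "0123abcd..."}

def should_alert(path):
    name = os.path.basename(path).lower()
    expected = WHITELIST.get(name)
    if expected is None:
        return True                  # not whitelisted at all: alert
    if checksum(path) != expected:
        return True                  # whitelisted name but changed file: alert
    return False                     # matches the authorized copy: suppress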
> i admit those problems are hard to solve... but for the sake of
> argument let's consider a possible solution... let's say for the sake of
> argument that the ignore list contains a variety of information (hash of
> the program in question, hashes of the files upon which its behaviour
> depends, the command-line arguments normally used with it, etc)... let's
> say that it can be stored both online (to facilitate background
> scanning) and offline for a more secure usage model and as a backup for
> the online one... and let's say that it's been digitally signed* by a key
> that is stored offline... tampering with the ignore list becomes detectable
> and reversible, tampering with the environment that the ignore list
> contains a snapshot of also becomes detectable (reversible with backups
> i suppose)...
>
> (*digitally signed using an algorithm like RSA, such that the key for
> checking the signature and the key for producing the signature are not
> the same)
Hadn't thought of the dual key encryption, but that is
the kind of thing needed to explore here imo.
>ok, well, posts on the problems associated with anti-trojan software
>have waned so i guess that for our purposes we'll have to think of the
>list of problems as being more or less complete...
>
>this is what has emerged... (somewhat re-ordered)
>
>1) define trojan (and perhaps other malware) in terms that may perhaps
>be more useful than the current one(s) to facilitate/expedite sample
>classification. (perhaps, instead of 'trojans' we should be talking
>about 'potential trojans' since we might be able to eliminate the human
>element of the classification)
>2) decide on a criteria for when a malware definition should be added to
>the default set of definitions scanned for by a scanner. (see my
>hdformat.exe / sexyfun.exe example... at some point the sexyfun.exe
>version will pose enough of a threat to mitigate the false positives
>that hdformat.exe will generate...)
As I was just musing on a different thread, I think a lost time and
money consideration should be included in the malware definitions,
whether the program has characteristics of virus, Trojan or "other
malware". I think mere "nuisanceware" should be excluded from
detection. If an "unwanted" program is difficult to get rid of or
causes file damage then and only then should it be detected. However,
other users must be considered and the lost time and $ factor should
include the effects of others on the internet and not just the local
PC or PCs.
Art
http://www.epix.net/~artnpeg
art...@claymania.com
yes, this is very much like what i tried to describe in the section you
quoted below... i took it as a given that programs on the ignore list
would have to be integrity checked to make sure they were still the same
as when they were originally authorized...
>>i admit those problems are hard to solve... but for the sake of
>>argument let's consider a possible solution... let's say for the sake of
>>argument that the ignore list contains a variety of information (hash of
>>the program in question, hashes of the files upon which its behaviour
>>depends, the command-line arguments normally used with it, etc)... let's
>>say that it can be stored both online (to facilitate background
>>scanning) and offline for a more secure usage model and as a backup for
>>the online one... and let's say that it's been digitally signed* by a key
>>that is stored offline... tampering with the ignore list becomes detectable
>>and reversible, tampering with the environment that the ignore list
>>contains a snapshot of also becomes detectable (reversible with backups
>>i suppose)...
>>
>>(*digitally signed using an algorithm like RSA, such that the key for
>>checking the signature and the key for producing the signature are not
>>the same)
>
>
> Hadn't thought of the dual key encryption, but that is
> the kind of thing needed to explore here imo.
well, in order to protect against those apps being called with malicious
command line arguments, you need to store info about authorized
command line arguments and you have to check them at runtime, so the
ignore list has to be accessible during normal operation... in order to
protect the ignore list you need some sort of authentication that a
piece of malware (known or unknown) cannot forge... ergo, a public key
authentication system where the private key is stored offline and is
inaccessible to the malware...
then the problem is how to authorize changes to the ignore list without
letting the malware see the private key - that requires some kind of
clean boot technique/technology... the app can save proposed changes to
the ignore list that are subject to review and authorization during a
clean boot...
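roughly what i have in mind for the runtime side (python, purely
illustrative - the entry layout and file names are invented); note that
proposed additions are only queued, never applied, during normal
operation:

# toy runtime check against the (signature-verified) ignore list, plus a
# pending-changes queue that is only reviewed and signed during a clean
# boot. entry layout and file names are invented for illustration.
import json

def is_authorized(ignore_list, program_hash, args):
    for entry in ignore_list:
        if entry["program_sha256"] == program_hash:
            return set(args) <= set(entry["allowed_args"])
    return False

def propose_addition(entry, pending_file="pending_changes.json"):
    # called during normal operation; nothing here is trusted yet
    try:
        with open(pending_file) as f:
            pending = json.load(f)
    except FileNotFoundError:
        pending = []
    pending.append(entry)
    with open(pending_file, "w") as f:
        json.dump(pending, f, indent=2)

# during a clean boot the administrator reviews pending_changes.json,
# merges the approved entries into the ignore list and re-signs it with
# the offline private key (as in the signing sketch earlier in the thread).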
good question... what kinds of things do trojans do?...
1) file system corruption / disk overwriting
2) file corruption
3) acting as a remote command server (executing commands from remote
locations)
4) transmitting keystrokes / credit info / passwords / files (in general
any form of data that could contain sensitive personal information) to
remote sites
5) denying access to files, applications, or network resources that one
would otherwise have access to
6) retrieving and/or installing something to do one or more of the
former things (droppers for want of a better term)
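if those categories hold up, classification could be reduced to tagging a
sample with whichever of them an analyst (or an automated tool) finds in
it - a rough sketch (python, purely illustrative):

# toy functional classification: a sample is described purely by which
# of the listed capabilities it has, with no appeal to user expectations.
# anything with at least one capability is a 'potential trojan'.
CAPABILITIES = {
    1: "corrupts file systems / overwrites disks",
    2: "corrupts files",
    3: "acts as a remote command server",
    4: "transmits sensitive data to remote sites",
    5: "denies access to files, applications or network resources",
    6: "retrieves/installs something that does one of the above",
}

def classify(found_capabilities):
    # found_capabilities: a set of numbers from the table above
    tags = sorted(found_capabilities & set(CAPABILITIES))
    return {
        "potential_trojan": bool(tags),
        "capabilities": [CAPABILITIES[n] for n in tags],
    }

# e.g. a RAT that also ships a keylogger:
print(classify({3, 4}))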
>>but possibility is a boolean quality... a boolean is our desired result
>>(add or don't add to the definition set) but the criteria on which it's
>>based need not be...
>>
>>
>>>Obviously it's not easy to determine
>>>whether a piece of blue cheese :-) has the ability to cause sufficient
>>>damage to warrant detection.
>>
>>so what do you call 'sufficient'...
>
>
> I would have to give that more thought. It would be "sufficient" if
> certain conditions are met; we must mull over what these conditions
> are and when they are met. Perhaps the blue cheese should be capable
> of causing more than just a few isolated incidents (but then you can
> argue that those users affected by the isolated incidents certainly
> would have appreciated protection too), or its possible legitimate uses
> should substantially outweigh the number of bad things you can do with
> it. For example a VBS interpreter can interpret and execute VBS viruses,
> but most of the time it will interpret legitimate VBS programs, so
> alerting on the VBS interpreter on the grounds of its ability to cause
> damage would be silly. This is perhaps a bad example but I hope you get
> the idea.
or maybe one should just add everything that meets the functional
requirements of what one is looking for, and develop a product that has
more than one 'sensitivity' setting, so that the highest sensitivity
setting would alert on everything one knows about, and lower settings
would only alert on things which are more likely to cause widespread
problems...
but then maybe that has problems of its own, like how is a
user/administrator supposed to know what sensitivity setting is best for
his/her environment...
how do you quantify the risk that is posed by a controversial app? anybody?
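something like this is the mechanism i'm picturing for the sensitivity
settings (python, purely illustrative - the definitions and prevalence
figures are invented); it obviously punts on the hard part, namely where
the prevalence estimates come from:

# toy sensitivity setting: every known 'potential trojan' definition
# carries an estimated prevalence; lower sensitivity settings only alert
# on the more widespread ones. all names/numbers are invented.
DEFINITIONS = {
    "sexyfun.exe variant": 0.90,    # estimated prevalence, 0..1
    "bo2k (stock build)":  0.40,
    "hdformat.exe":        0.01,
}

# sensitivity -> minimum prevalence required before alerting
SENSITIVITY_FLOOR = {"high": 0.0, "medium": 0.25, "low": 0.75}

def alert_set(sensitivity):
    floor = SENSITIVITY_FLOOR[sensitivity]
    return [name for name, prev in DEFINITIONS.items() if prev >= floor]

print(alert_set("high"))     # everything known
print(alert_set("low"))      # only the widespread stuff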
[snip]
>>but generally, i think this should be much easier than you seem to
>>suggest... a controversial app. has an intended legitimate purpose - the
>>user/administrator should be aware if they have intentionally installed
>>something with that purpose in mind...
>
>
> Provided the scanner outputs intelligent alert messages.
one can only hope...
>>>KAV already uses separate definition files for viruses and (what it
>>>considers to be) Trojans. So it's entirely possible to do it.
>>
>>it can detect the ones it detects, yes, but ... i think lee described
>>this better, but, the best way to detect trojans *may* be fundamentally
>>different than the best way to detect viruses...
>
>
> So you are implying that KAV's method of detecting Trojans may be,
> but is not necessarily, the best?
not exactly... i'm sure that kav's method of detecting the set of
non-replicating malware that it detects is perfectly good... i'm
suggesting that that set may not be representative of trojans in general
and that method may not scale well to an arbitrary superset of what it
currently works on...
> Well, KAV's method may not
> necessarily be the best but it has proven to be quite effective for
> detecting the programs currently in its Trojan database, and frankly
> I doubt that our uncontroversial Trojans, once we have decided what
> constitutes an uncontroversial Trojan, will look much different from
> the programs currently in KAV's Trojan database..
maybe...
> I am not all that familiar with scanner design but my guts tell me
> that detecting known Trojans would be less trouble than detecting
> known viruses (Trojans don't replicate so there is no polymorphism
> and stuff like that to deal with),
i wouldn't count the possibility of polymorphism (or something
equivalent) out so quickly... runtime compression/encryption being what
it is, not to mention the possibility of running the source through some
kind of generator that produces functionally equivalent but bytewise
different code, i think a problem like polymorphism is entirely possible...
even the pro version (bo2k)?
> Dunno if there's a version of BO that isn't detected for
> some reason. Based on conversations here it seems there are different
> versions of Netbus and only the "Pro" version is not detected.
and yet it can still be used as a trojan...
>>are they not still usable as trojans? of course they
>>are...
>
>
> I guess you mean programs that can be used for malicious purposes?
yes...
>>i don't think we'll see any really good efforts in anti-trojans until we
>>can look at code and say 'detect this' or 'don't detect this'... that's
>>not possible right now...
>
>
> In your dreams :) Human judgement as to context is required. The code
> of your sexyfun Trojan is the same as the code of your other "ok"
> whatchacallit.
exactly why trojan is such a poorly defined class...
> I don't think you can just look at code in the case of
> viruses either without a terrible false alarm problem.
no, you can... the virus classification is based on an entirely
functional definition of virus... the definition of trojan is not a
functional one (in that it is not solely defined by the functions it
must possess)... the function of an object can be determined by looking
at the code, but user expectations cannot be determined that way and
user expectations are an integral part of the definition of trojan horse
programs...
so long as we're saddled with a classification based on user
expectations we won't be able to implement a rigorous/thorough treatment
of that class of objects...
> Viruses that
> are actually detected are Trojans it seems to me.
some viruses could be considered trojans, yes... but not all... some
viruses might inform the user of its actions and request permission
before proceeding... it's still a virus, but none of its behaviour is
unknown or unexpected by the user so the user is not deceived...
> A program that
> simply copies itself is replicative code and meets one criteria of
> "virus" but av may well not alert since the code doesn't spread or
> attach or do any harm.
if av's don't detect self-replicating code that they know about, then
they've made a mistake...
> A judgement as to purpose or intent is
> obviously made in cases of non heuristic detection. And it seems to me
> heuristics are even more dangerous if not practically useless in the
> case of Trojans.
i don't know off hand how one would implement heuristic detection of
trojans, but i don't think it would be all that much worse than
heuristic detection of viruses...
>>>It seems the
>>>problems, except for good alert reporting and built-in removal
>>>capabilities, are being addressed by a number of antivirus product
>>>vendors.
>>
>>and the way they address them don't necessarily serve the good of the
>>user...
>
>
> Don't know what you mean. I think the good of the user is being served
> quite well by the current anti-malware scanners.
you mean the ones that specifically removed detection for remote access
'tools' (not because the functionality of those tools changed but
because the way they were marketed did)?
>>>A small few do a good job at detection at least, IMO. But how
>>>do you provide cleaning of endless malware which affects the registry
>>>and several other startup areas on different versions of OS without
>>>huge data bases?
>>
>>backups... backing up the registry and all those other start up areas is
>>not an unreasonable thing to do... facilitating it should be a priority
>>for an anti-trojan developer...
>
>
> I disagree. Backing up is not the responsibility of an anti-malware
> scanner.
i didn't say it was the responsibility of the scanner, i said
facilitating it should be a priority for the developer... add-on tools
for generic recovery, much like some of the tools that were available in
thunderbyte anti-virus, back in the day...
i hope you were referring to item 2 above, rather than 1... taking those
into consideration when deciding whether or not to add detection for a
certain thing i can see, but not when classifying that thing in the
first place...
> whether the program has characteristics of virus, Trojan or "other
> malware". I think mere "nuisanceware" should be excluded from
> detection. If an "unwanted" program is difficult to get rid of or
> causes file damage then and only then should it be detected. However,
> other users must be considered and the lost time and $ factor should
> include the effects of others on the internet and not just the local
> PC or PCs.
i'm sure this is going to be a controversial point because it's so hard
to estimate those quantities (and so hard to justify those estimates
after the fact)...
Wouldn't it make sense to regroup 1) and 2) in a single category,
"data corruption"?
> 3) acting as a remote command server (executing commands from remote
> locations)
> 4) transmitting keystrokes / credit info / passwords / files (in general
> any form of data that could contain sensitive personal information) to
> remote sites
As far as 3 is concerned, it certainly is a distinct feature but
again, wouldn't it make sense to dissolve 3 into "data corruption"
as suggested above and 4), depending on what the consequences of it
acting as a remote server are...?
> 5) denying access to files, applications, or network resources that one
> would otherwise have access to
> 6) retrieving and/or installing something to do one or more of the
> former things (droppers for want of a better term)
Maybe 7) deception?
Some Trojans could do their harm without attempting to veil what they
are doing, or they could simply display a fake error message. Maybe
a seventh category could include any attempts to explicitly deceive
the user. I'm thinking of a program that pretends to be a [insert your
favorite pop star here] screen saver (and acts like one to a certain
extent) and then does something else.
Or what about those greeting cards with very well hidden or even absent
license agreements, or retaliatory actions of a program when you enter
a cracked/invalid serial number? (Though strictly speaking the latter
may not count as deception.)
> or maybe one should just add everything that meets the functional
> requirements of what one is looking for, and develop a product that has
> more than one 'sensitivity' setting, so that the highest sensitivity
> setting would alert on everything one knows about, and lower settings
> would only alert on things which are more likely to cause widespread
> problems...
>
> but then maybe that has problems of its own, like how is a
> user/administrator supposed to know what sensitivity setting is best for
> his/her environment...
Checkboxes - let the user decide what he wants to scan for. This could
easily be implemented with modular def files.
> i wouldn't count the possibility of polymorphism (or something
> equivalent) out so quickly... runtime compression/encryption being what
> it is, not to mention the possibility of running the source through some
> kind of generator that produces functionally equivalent but bytewise
> different code, i think a problem like polymorphism is entirely possible...
Hmmm... this deserves further consideration - later. ;-)
> > Maybe the inhibition of the alert (note I didn't say ignore the
> > file) whitelist can be based on file integrity (checksum) in an
> > algorithm with the filename chosen by the admin. This would
> > make (at least some) modifications to 'whitelisted' programs
> > problematic for the malware writer (obscurity plus file integrity),
> > and the fact that the 'whitelisted' files are not 'ignored' by the
> > scanner means that if any were to be modified (to include add
> > in [aftermarket?] malware), they would now alert because the
> > combination of 'filename alg(checksum)-whitelisted' has now
> > become altered, or now includes a possible other known malware
> > not whitelisted with that 'filename alg(checksum)'.
>
> yes, this is very much like what i tried to describe in the section you
> quoted below... i took it as a given that programs on the ignore list
> would have to be integrity checked to make sure they were still the same
> as when they were originally authorized...
This would all be integral to a properly implemented whitelisting-only
scheme and such a scheme happily sidesteps the issue of "what is a Trojan"
by providing a fundamentally better "solution" than current blacklisting
approaches -- it allows that which has been authorized to run to run and
prevents everything else. Under a whitelisting approach, the issue of
whether something is good, bad, viral, Trojanic or just undetermined is
irrelevant. The question at issue is "has a suitable authority cleared
this code for execution (by this user) on this machine?".
Of course, this approach will not work with typical (non-corporate) users
as the whole "problem" you are discussing here stems from the simple
observation that very few people are actually prepared to take proper
responsibility for the code they choose to run on their machines, and
that they will not be prepared to expend the (small) effort required to
maintain the whitelist authorization mechanism.
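For what it's worth, the decision logic of such a scheme is trivial
(Python, purely illustrative - the authorization store is just a
hard-coded set here); all the real work is in maintaining the
authorizations:

# toy default-deny execution check: a program may run only if its hash
# has been explicitly authorized for this user on this machine. anything
# not on the list is refused - no notion of 'good' or 'bad' is needed.
import hashlib

# (user, sha256-of-binary) pairs approved by the whitelist authorizer;
# the values here are invented placeholders.
AUTHORIZED = {
    ("alice", "9f2c6d...e1"),
    ("admin", "77ab54...03"),
}

def hash_file(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def may_execute(user, path):
    return (user, hash_file(path)) in AUTHORIZED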
<<snip>>
> well, in order to protect against those apps being called with malicious
> command line arguments, you need to store info about authorized
> command line arguments and you have to check them at runtime, so the
> ignore list has to be accessible during normal operation... in order to
> protect the ignore list you need some sort of authentication that a
> piece of malware (known or unknown) cannot forge... ergo, a public key
> authentication system where the private key is stored offline and is
> inaccessible to the malware...
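For what the public-key idea above could look like, a small sketch of
verifying a detached signature over the ignore list (Python with the
'cryptography' package; the file names are made up for the example -- the
point is only that the private key never needs to be present on the running
system):

  # sketch: verify a detached Ed25519 signature over the ignore list;
  # the private key stays offline, only the public key ships with the tool
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

  def ignore_list_is_authentic(list_path, sig_path, pubkey_bytes):
      public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
      data = open(list_path, "rb").read()
      signature = open(sig_path, "rb").read()
      try:
          public_key.verify(signature, data)  # raises if the list was altered
          return True
      except InvalidSignature:
          return False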
Such "potentially dangerous but also useful" applications should only be
authorized to be run by suitably "qualified" users. This is why a simple
whitelisting "bolt-on" is not as easy to do for Win9x/ME OSes as it would
be for NT-based OSes or for non-Windows OSes that have native multi-user
capabilities.
> then the problem is how to authorize changes to the ignore list without
> letting the malware see the private key - that requires some kind of
> clean boot technique/technology... the app can save proposed changes to
> the ignore list that are subject to review and authorization during a
> clean boot...
Under a full-run whitelisting scheme, this is irrelevant, as no
"questionable" code should be able to be running anyway. If it is, that
means the whitelist authorizer has slipped up which means the system is
screwed anyway independent of the whitelisting system.
--
Nick FitzGerald
> > 1) define trojan (and perhaps other malware) in terms that may perhaps
> > be more useful than the current one(s) to facilitate/expedite sample
> > classification. (perhaps, instead of 'trojans' we should be talking
> > about 'potential trojans' since we might be able to eliminate the human
> > element of the classification)
>
> Hmmm... IMO that's pointless. The word "potential" will be dropped very
> quickly because it's much more convenient to just say "Trojan". It may
> be formally more accurate to say "potential Trojan" but I doubt that
> this expression will gain acceptance.
Agreed.
Also, much as Art seems to dislike such things, you should note an
interesting and not altogether irrelevant result of theoretical
CompSci research here. _Any_ string of bits is a virus on some
potential Turing machine. Equally, _any_ string of bits will be
considered Trojanic by some user of some potential Turing machine.
Thus PERFECT.BAT -- with the suitable addition of the string
"potential " in the right part of its reporting function -- would be
just the thing some of you are asking for...
<<Snip -- it seems Frederic and I are in broad agreement>>
> > 6) decide how best to train the user to make intelligent decisions based
> > on the output an anti-trojan tool is able to provide.
>
> In order for them to make intelligent decisions, the anti-Trojan tool
> needs to output intelligent messages in the first place. I'm thinking
> of AntiVir that reported dialers as "dialer virus" or something like
> that, which got it into trouble.
8-)
One of my favourite themes...
Of course, some of these developers will not change their reporting
formats as they have large corporate customers with complex report-
parsing scripts automatically analysing the increasingly jumbled,
increasingly nonsensical garbage that pours out of their scanners.
Ignoring the fact that the messages their scanners provide to the
_end-user_ are unintelligible and suggest that the developers and
malware analysts at their companies must be drug-crazed morons to have
coded and kept using such rubbish is par for the course in this
industry segment, so don't expect any change or improvement here...
--
Nick FitzGerald
<snip>
As a home user of Win 98 not on a LAN my "whitelist reference" is a
cloned hard drive that I back up to weekly. AV scanners are an
important part of my checks to make pretty damn sure that I'm not
backing up malicious code. I don't see any way around anti-malware
scanners as useful prevention aids. That's why I give credit to those
vendors that address the issues in spite of the lack of clear
definitions.
Art
http://www.epix.net/~artnpeg
art...@claymania.com
>Ignoring the fact that the messages their scanners provide to the
>_end-user_ are unintelligible and suggest that the developers and
>malware analysts at their companies must be drug-crazed morons to have
>coded and kept using such rubbish is par for the course in this
>industry segment, so don't expect any change or improvement here...
Here's hoping that this "writer" can come up with something better
via his existential haze - and more on this later! Thanks for this
lead in Nick... ,o)
Noho ora mai,
Brian
<snip>
> > A program that
> > simply copies itself is replicative code and meets one criterion of
> > "virus" but av may well not alert since the code doesn't spread or
> > attach or do any harm.
>
>if av's don't detect self-replicating code that they know about, then
>they've made a mistake...
What I had in mind is the batch file discussed in other threads here
recently:
@ECHO OFF
COPY %0 A.COM>NUL
I checked with five good DOS scanners and none alert on this virus or
the A.COM it generates. I presume it's well known replicative code. I
suspect there are many similar examples. The batch is also a fragment
of batman which av do alert on.
> >>and the way they address them don't necessarily serve the good of the
> >>user...
> >
> > Don't know what you mean. I think the good of the user is being served
> > quite well by the current anti-malware scanners.
>
>you mean the ones that specifically removed detection for remote access
>'tool's (not because the functionality of those tools changed but
>because the way they were marketed did)?
You'll have this in an imperfect, illogical and political world :)
[snip]
> > Well, it doesn't seem like a bad idea to me. What categories of
> > "functional capabilities" do you have in mind?
>
> good question... what kinds of things do trojans do?...
> 1) file system corruption / disk overwriting
> 2) file corruption
> 3) acting as a remote command server (executing commands from remote
> locations)
> 4) transmitting keystrokes / credit info / passwords / files (in general
> any form of data that could contain sensitive personal information) to
> remote sites
> 5) denying access to files, applications, or network resources that one
> would otherwise have access to
> 6) retrieving and/or installing something to do one or more of the
> former things (droppers for want of a better term)
Undermines the desired security paradigm.
('desire' is unfortunately the troublesome human factor)
One RAT can cause all of the other effects you mentioned.
The fact remains that it is the *administrator's* desired
security paradigm, not necessarily the users', that is the
important fact to consider. If the 'user' is tricked into
using the program that is one thing, but if the administrator
is the one being tricked, it becomes malware.
Since most administrators aren't even aware that they *are*
administrators, I don't see how they can be forced to get a
clue. I think the 'whitelisting' option is the only workable
method, but it requires *real* administration of the computer
in order to work.
...and what are the chances of *that* happening!?
How costly is a data leak? It seems likely to me that
some companies would prefer the system crash and
burn rather than allow data leakage. I don't think that a
weighting system for malware is a consideration for
deciding whether or not to include detection.
A security compromise is a bad thing no matter what
the offending program is capable of doing (as written).
> > whether the program has characteristics of virus, Trojan or "other
> > malware". I think mere "nuisanceware" should be excluded from
> > detection.
"nuisanceware" can cause major problems for time critical
applications. They can also have coding errors which allow
them to be further misused by another program. Why post
an armed guard at the front entrance, and leave the side
entrance unguarded?
>3) decide on a criteria for when a malware definition should be removed
>from the default set of definitions scanned for by a scanner. (see bo2k
>or other 'pro' RAT's...)
Only when it stops being malicious, i.e. capable of being
stealth-installed, for starters.
>it seems to me that right now the problem we have as users is that there
>are no really good anti-trojan/malware tools out there because the
>people who could implement such tools face the problems outlined above -
>by attacking those problem we may get rid of the barrier that stands in
>the way of having our problem solved...
Or they are all just waiting for Palladium before doing any great
reinventions, as that will change everything again.
>---------- ----- ---- --- -- - - - -
[x] Always trust Microsoft
>---------- ----- ---- --- -- - - - -
point of fact, no, that is not the question at issue... not when we're
talking about trojans...
the white-list method, strong though it may be, is not a panacea, it
does not solve all problems... it addresses the issue of unauthorized
execution... unauthorized execution is, in general, not at issue when
we're talking about trojans... when we're talking about trojans the
issue is making bad authorization decisions, which a white-list cannot
help us with...
if a user knew that a particular file was a trojan, do you think he'd try
to run it? no of course not, he probably wouldn't even bother
downloading it... but since he doesn't know that he'll happily download
it and equally happily ignore the white-list alert that can say little
more than "this is an unknown program"...
> Of course, this approach will not work with typical (non-corporate) users
> as the whole "problem" you are discussing here stems from the simple
> observation that very few people are actually prepared to take proper
> responsibilty for the code they choose to run on their machines and given
> that they will not be prepared to expend the (small) effort required to
> maintain the whitelist authorization mechanism.
that is not a fair characterization... in fact, i think you're not giving
credit to the complexity of the problem...
security (all security) comes down to one quintessential concept...
*trust*... establishing trust and enforcing trust boundaries throughout
a system... a white-list can be very effective at enforcing those trust
boundaries but it falls down completely when it comes to establishing
trust in the first place (ie. deciding what to trust when adding items
to the white-list - something that would invariably need to occur)...
arguably, establishing trust is the harder (and more significant)
problem (if we could do that then would we need a white-list application
in the first place?)...
where the white-list falls down, the black-list (and in my suggestions
in this thread, the 'grey-list') does not... the trojan is a trojan
because the user is basing his/her evaluation of trust on false
information and the black-list/grey-list represents a knowledge base
that has the potential to correct the user's false impression and
thereby facilitate better judgement...
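as a rough sketch of how such a grey-list knowledge base could feed the
authorization decision (python; the capability labels and the hash-keyed
layout are just assumptions made for the example)...

  # sketch: consult a grey-list knowledge base before whitelisting a program
  import hashlib

  def digest(path):
      return hashlib.sha256(open(path, "rb").read()).hexdigest()

  def authorization_advice(path, grey_list):
      # grey_list maps a content hash to the capabilities known to be present,
      # so the admin can make an informed trust decision before whitelisting
      caps = grey_list.get(digest(path), [])
      if not caps:
          return "unknown program - no capability information on record"
      return "known capabilities: " + ", ".join(sorted(caps))

  # e.g. grey_list = {"ab12...": ["remote command server", "keystroke capture"]}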
> <<snip>>
>
>>well, in order to protect against those apps being called with malicious
>>command line arguments, you need to store info about authorized
>>command line arguments and you have to check them at runtime, so the
>>ignore list has to be accessible during normal operation... in order to
>>protect the ignore list you need some sort of authentication that a
>>piece of malware (known or unknown) cannot forge... ergo, a public key
>>authentication system where the private key is stored offline and is
>>inaccessible to the malware...
>
>
> Such "potentially dangerous but also useful" applications should only be
> authorized to be run by suitably "qualified" users.
agreed... but that isn't so much a malware issue as it is a user rights
issue... (ie. trust boundaries pertaining to wetware entities rather
than software entities)
> This is why a simple
> whitelisting "bolt-on" is not as easy to do for Win9x/ME OSes as it would
> be for NT-based OSes or for non-Windows OSes that have native multi-user
> capabilities.
indeed...
>>then the problem is how to authorize changes to the ignore list without
>>letting the malware see the private key - that requires some kind of
>>clean boot technique/technology... the app can save proposed changes to
>>the ignore list that are subject to review and authorization during a
>>clean boot...
>
>
> Under a full-run whitelisting scheme, this is irrelevant, as no
> "questionable" code should be able to be running anyway. If it is, that
> means the whitelist authorizer has slipped up which means the system is
> screwed anyway independent of the whitelisting system.
you make the question of what to authorize and what not to authorize
sound trivial... we both know it isn't... and this problem is not
independent of the whitelisting system... a whitelisting system needs to
allow users/administrators to add items to the whitelist, the system
would be useless without that... you cannot ignore the problem of
establishing trust levels when you're enforcing trust boundaries... it
is a cart without a horse to pull it...
the above snippet of code is not a virus... it does not recursively
self-replicate (its offspring do not replicate, ergo they are not
viruses, and if none of the offspring are viruses then neither is the
parent)...
since it is not a virus, i wouldn't expect anti-virus products to detect
it...
> I presume it's well known replicative code. I
> suspect there are many similar examples. The batch is also a fragment
> of batman which av do alert on.
batman likely contains additional code to make the above into a virus...
number 3 is the rat... and yes, it can do all of the other things, if
the commands it accepts are powerful enough...
> The fact remains that it is the *administrator's* desired
> security paradigm, not necessarily the users', that is the
> important fact to consider. If the 'user' is tricked into
> using the program that is one thing, but if the administrator
> is the one being tricked, it becomes malware.
the administrator is the one who defines what good and bad results are,
but the user can be tricked into using apps that have bad results...
> Since most administrators aren't even aware that they *are*
> administrators, I don't see how they can be forced to get a
> clue. I think the 'whitelisting' option is the only workable
> method, but it requires *real* administration of the computer
> in order to work.
>
> ....and what are the chances of *that* happening!?
it requires more than that... it requires the magical ability to know
what programs actually do, rather than just what they're supposed to do...
i suppose you could...
>>3) acting as a remote command server (executing commands from remote
>>locations)
>>4) transmitting keystrokes / credit info / passwords / files (in general
>>any form of data that could contain sensitive personal information) to
>>remote sites
>
>
> As far as 3 is concerned, it certainly is a distinct feature but
> again, wouldn't it make sense to dissolve 3 into "data corruption"
definitely not... opening up a system to accept commands from a remote
location is nothing like data corruption... RAT's aren't data diddlers,
per se, they are a way for third parties to take over control of your
system, to steal your data, and to use your system for a variety of
things (not the least of which being to impersonate you at the network
level in a DoS or DDoS)...
> as suggested above and 4), depending on what the consequences of it
> acting as a remote server are...?
4 constitutes a passive sniffer, while 3 represents an active server...
>>5) denying access to files, applications, or network resources that one
>>would otherwise have access to
>>6) retrieving and/or installing something to do one or more of the
>>former things (droppers for want of a better term)
>
>
> Maybe 7) deception?
no... for 2 reasons... you don't deceive computers (we're talking about
actions a particular program does to a computer), and because you can't
define deception as an algorithm (because it's not something you do to
computers)...
deception would invalidate the effort to remove the human element from
the equation and it would turn it into a definition of trojan, which we
already know is not a class we can easily deal with...
> Some Trojans could do their harm without attempting to veil their
> doing, or they could simply display a fake error message. Maybe
> a seventh category could include any attempts to explicitly deceive
> the user.
all trojans deceive the user... that's part of the definition of trojan
horse programs...
if the people of troy had known there were enemy agents hidden in the
belly of the giant wooden horse they would have never brought it into
the city...
>>>if av's don't detect self-replicating code that they know about, then
>>>they've made a mistake...
>>
>> What I had in mind is the batch file discussed in other threads here
>> recently:
>>
>> @ECHO OFF
>> COPY %0 A.COM>NUL
>>
>> I checked with five good DOS scanners and none alert on this virus or
>> the A.COM it generates.
>
>the above snippet of code is not a virus... it does not recursively
>self-replicate (its offspring do not replicate, ergo they are not
>viruses, and if none of the offspring are viruses then neither is the
>parent)...
It doesn't meet a fully qualified definition of virus, but it does
meet the "self replicating" _essential_ portion of a fully qualified
definition. You had loosely used the unqualified definition when you
said "if avs don't detect self-replicating code that they know about
then they've made a mistake ...". So I wanted to get your reaction to
the silly little batch that creates a single copy of itself :) I agree
that av should not alert on it as a virus.
But I wonder what you think about alerting on it as a Trojan. The
A.COM it creates is unterminated code which leads to unpredictable
results. The results include hangs, reboots, and ... who knows what?
If the batch is presented as doing something worthwhile or good,
is it a Trojan since it might cause problems instead? But if so, do
you call all unterminated code a Trojan?
How does your Trojan algorithm know how the batch was presented to the
user? When you speak of coming up with a clear definition of Trojan
that can be implemented algorithmically, I immediately wonder how the
algorithm knows how the code was presented to the user :)
Now, most av don't seem to think the batch is a Trojan either since
they don't alert. Is it mere nuisanceware not worthy of a Trojan
alert? Symantec seems to consider the thing a Trojan and some
versions of NAV alert on it (under odd circumstances which are in fact
false alarms).
I'm sorry for beating this example to death since it's been discussed
in other threads. But in my mind, it's important to your/our purpose
in this thread. If something as relatively simple as the batch example
is cause for debate, we've got a lot of work cut out for us :)
agreed... that's the way i think it should be done too...
>>it seems to me that right now the problem we have as users is that there
>>are no really good anti-trojan/malware tools out there because the
>>people who could implement such tools face the problems outlined above -
>>by attacking those problem we may get rid of the barrier that stands in
>>the way of having our problem solved...
>
>
> Or they are all just waiting for Palladium before doing any great
> reinventions, as that will change everything again.
well, if we're very, very lucky, palladium will go nowhere fast... or
microsoft will lose so much market share as a result of palladium that
they're forced to drop it...
one of the worst solutions to security questions, that i can think of,
is to take answering those questions out of the hands of the users
entirely and put them in the hands of not nearly disinterested third
parties...
depends on the data...
> It seems likely to me that
> some companies would prefer the system crash and
> burn rather than allow data leakage.
yes, that's reasonable...
> I don't think that a
> weighting system for malware is a consideration for
> deciding whether or not to include detection.
well, i would agree that cost is hard to quantify, and the estimates are
hard to justify, but that doesn't stop some people from doing it and i
can see how some people might think it's a good way to decide what
malware is the most important to detect....
> A security compromise is a bad thing no matter what
> the offending program is capable of doing (as written).
very true... and to be honest, my preference would be to not base the
decision to detect on cost, but that's just a preference...
>> > whether the program has characteristics of virus, Trojan or "other
>> > malware". I think mere "nuisanceware" should be excluded from
>> > detection.
>
>
> "nuisanceware" can cause major problems for time critical
> applications. They can also have coding errors which allow
> them to be further misused by another program. Why post
> an armed guard at the front entrance, and leave the side
> entrance unguarded?
hmmm... that's something i didn't list when frederic asked what type of
functionality a classification should be based on... i listed denying
access to resources but not disrupting access to resources (one is
persistent, while the other isn't)... i think that was an oversight...
> > The fact remains that it is the *administrator's* desired
> > security paradigm, not necessarily the users', that is the
> > important fact to consider. If the 'user' is tricked into
> > using the program that is one thing, but if the administrator
> > is the one being tricked, it becomes malware.
>
> the administrator is the one who defines what good and bad results are,
> but the user can be tricked into using apps that have bad results...
I was thinking that the case might arise that the 'administrator'
must trick the user into installing the 'tool' (which is stealthily
installed, and pre-whitelisted not to display detection). In this
scenario, it is still a 'tool' even though it is a stealth installer,
and perhaps even in a 'social engineering' package.
> > Since most administrators aren't even aware that they *are*
> > administrators, I don't see how they can be forced to get a
> > clue. I think the 'whitelisting' option is the only workable
> > method, but it requires *real* administration of the computer
> > in order to work.
> >
> > ....and what are the chances of *that* happening!?
>
> it requires more than that... it requires the magical ability to know
> what programs actually do, rather than just what they're supposed to do...
"Any technology, sufficiently advanced, is indistinguishable from magic."
Arthur C. Clarke ~ iirc
...so I guess we'll have to push technology in order to make this work.
People can't be bothered to maintain a 'whitelist' based system,
so they will be stuck with TCPA/Palladium, and be spoon fed
their security.
It also depends on the value of the data. What I mean is that
even if no data has leaked, when the compromise has been
discovered you can't be certain that no data has been leaked.
Consider the recent credit card information leak, even if no
hacker has actually stolen any information, the security *was*
compromised and it must be assumed that the information
was leaked, and appropriate measures taken.
So, the 'nuisance' program has proven that security has
been breached (unless you decide not to include detection).
> > It seems likely to me that
> > some companies would prefer the system crash and
> > burn rather than allow data leakage.
>
> yes, that's reasonable...
>
> > I don't think that a
> > weighting system for malware is a consideration for
> > deciding whether or not to include detection.
>
> well, i would agree that cost is hard to quantify, and the estimates are
> hard to justify, but that doesn't stop some people from doing it and i
> can see how some people might think it's a good way to decide what
> malware is the most important to detect....
True enough, so the anti-malware vendors can decide how to
deal with people who say that they dropped Malware Blocker
2010 because it didn't detect some program that Horse Hobbler
Deluxe stopped cold.
> > A security compromise is a bad thing no matter what
> > the offending program is capable of doing (as written).
>
> very true... and to be honest, my preference would be to not base the
> decision to detect on cost, but that's just a preference...
I'm at a loss to find any good criteria on which to base
such a decision. The number of reported incidents that
involved a certain program? Joke programs might be high
on that list. Those that provide DDoS capabilities should
*always* be detected, and of course all RATs and logic
bombs...the hard part is excluding some things thought
to be trivial, but are in fact beachheads.
> one of the worst solutions to security questions, that i can think of,
> is to take answering those questions out of the hands of the users
> entirely and put them in the hands of not nearly disinterested third
> parties...
It makes sense to base security in hardware, and of course
the software will have to change to support it. But then the
whole thing becomes dependent on the internet (and on the
BigBrother syndrome), not a good idea imo.
this may seem like semantics, but... if the offspring can't do what the
parent does, it isn't a replica of the parent... ergo the parent has not
self-replicated...
> But I wonder what you think about alerting on it as a Trojan.
i would think it's just as silly... it doesn't harm your data or expose
it to third parties, it doesn't make your system open to outside
control, it doesn't even do anything to degrade your performance...
> The
> A.COM it creates is unterminated code which leads to unpredictable
> results. The results include hangs, reboots, and ... who knows what?
> If the batch is presented as doing something worthwhile or good,
> is it a Trojan since it might cause problems instead? But if so, do
> you call all unterminated code a Trojan?
no, i wouldn't... generally the probability of bytes left over in memory
that get executed in such an instance actually forming something that
can do damage is very slim...
> How does your Trojan algorithm know how the batch was presented to the
> user?
there is no algorithm for that... that's the point of trying to remove
the human element from the equation...
> When you speak of coming up with a clear definition of Trojan
> that can be implemented algorithmically, I immediately wonder how the
> algorithm knows how the code was presented to the user :)
it can't, which is why the current definition is so poor... i've since
refined what i suggested so as to mean coming up with a new
classification (rather than redefining trojan) that would include the
undesirable capabilities of trojans but be entirely context blind...
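a rough sketch of what such a context-blind, capability-based classification
might look like as a data structure (python; the capability names come from
the list earlier in this thread, everything else is invented for the
example)...

  # sketch: classify by functional capability, with no judgement about how
  # the program was presented to the user (i.e. context blind)
  from enum import Flag, auto

  class Capability(Flag):
      NONE            = 0
      DISK_OVERWRITE  = auto()  # file system corruption / disk overwriting
      FILE_CORRUPTION = auto()
      REMOTE_COMMANDS = auto()  # acts as a remote command server
      DATA_EXFIL      = auto()  # sends keystrokes/passwords/files off-host
      ACCESS_DENIAL   = auto()  # denies access to files/apps/network resources
      DROPPER         = auto()  # retrieves/installs something that does the above

  def classify(caps):
      return "potentially dangerous" if caps else "no flagged capabilities"

  print(classify(Capability.REMOTE_COMMANDS | Capability.DATA_EXFIL))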
> Now, most av don't seem to think the batch is a Trojan either since
> they don't alert. Is it mere nuisanceware not worthy of a Trojan
> alert? Symantec seems to consider the thing a Trojan and some
> versions of NAV alert on it (under odd circumstances which are in fact
> false alarms).
>
> I'm sorry for beating this example to death since it's been discussed
> in other threads. But in my mind, it's important to your/our purpose
> in this thread. If something as relatively simple as the batch example
> is cause for debate, we've got a lot of work cut out for us :)
to my mind, the batch file itself is not a trojan, nor is it even a
'potential trojan'... it does nothing destructive and it does nothing
useful - i'd call it garbage...
the *.com file it creates is a corrupt program, no more, no less...
yes? and?
you yourself have just pointed out that it is something that should be
detectable, but also 'authorizable' in an anti-trojan package... it
seems to me like you've already answered any questions about the scenario...
>>>Since most administrators aren't even aware that they *are*
>>>administrators, I don't see how they can be forced to get a
>>>clue. I think the 'whitelisting' option is the only workable
>>>method, but it requires *real* administration of the computer
>>>in order to work.
>>>
>>>....and what are the chances of *that* happening!?
>>
>>it requires more than that... it requires the magical ability to know
>>what programs actually do, rather than just what they're supposed to do...
>
>
> "Any technology, sufficiently advanced, is indistinguishable from magic."
> Arthur C. Clarke ~ iirc
>
> ....so I guess we'll have to push technology in order to make this work.
>
> People can't be bothered to maintain a 'whitelist' based system,
> so they will be stuck with TCPA/Palladium, and be spoon fed
> their security.
white lists are a good idea, and they should be usable, but if the
active content of your system changes frequently, administering one
would be problematic...
the value of the data depends on the data...
> What I mean is that
> even if no data has leaked, when the compromise has been
> discovered you can't be certain that no data has been leaked.
> Consider the recent credit card information leak, even if no
> hacker has actually stolen any information, the security *was*
> compromised and it must be assumed that the information
> was leaked, and appropriate measures taken.
>
> So, the 'nuisance' program has proven that security has
> been breached (unless you decide not to include detection).
i think you're confused... a nuisance program and a program that can
leak information are two different things... if you have a word
processor, do you worry about it repartitioning your drive? no of course
not...
if a company is afraid that some nuisance program has done more than
just be a nuisance they obviously don't know enough about the program to
base any conclusions on and need to do more research... it's silly to treat
all compromises as worst case scenarios...
[snip]
>>>A security compromise is a bad thing no matter what
>>>the offending program is capable of doing (as written).
>>
>>very true... and to be honest, my preference would be to not base the
>>decision to detect on cost, but that's just a preference...
>
>
> I'm at a loss to find any good criteria with which to base
> such a decision. The number of reported incidents that
> involved a certain program?
maybe include the time period over which those incidents occurred... 15
incidents in 3 years as opposed to 15 incidents in 3 days indicates
quite different threats...
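to put numbers on that (a trivial sketch; the figures are just the ones from
the example above)...

  # incidents per day as a crude prevalence weight -- illustrative only
  def prevalence(incidents, days):
      return incidents / days

  print(prevalence(15, 3 * 365))  # ~0.014/day... background noise
  print(prevalence(15, 3))        # 5/day... an outbreak in progress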
This is a very basic and important point it seems.
>> But I wonder what you think about alerting on it as a Trojan.
>
>i would think it's just as silly... it doesn't harm your data or expose
>it to third parties, it doesn't make your system open to outside
>control, it doesn't even do anything to degrade your performance...
>
>> The
>> A.COM it creates is unterminated code which leads to unpredictable
>> results. The results include hangs, reboots, and ... who knows what?
>> If the batch is presented as doing something worthwhile or good,
>> is it a Trojan since it might cause problems instead? But if so, do
>> you call all unterminated code a Trojan?
>
>no, i wouldn't... generally the probability of bytes left over in memory
>that get executed in such an instance actually forming something that
>can do damage is very slim...
That was my feeling as well.
>> How does your Trojan algorithm know how the batch was presented to the
>> user?
>
>there is no algorithm for that... that's the point of trying to remove
>the human element from the equation...
>
>> When you speak of coming up with a clear definition of Trojan
>> that can be implemented algorithmically, I immediately wonder how the
>> algorithm knows how the code was presented to the user :)
>
>it can't, which is why the current definition is so poor... i've since
>refined what i suggested so as to mean coming up with a new
>classification (rather than redefining trojan) that would include the
>undesirable capabilities of trojans but be entirely context blind...
>
>> Now, most av don't seem to think the batch is a Trojan either since
>> they don't alert. Is it mere nuisanceware not worthy of a Trojan
>> alert? Symantec seems to consider the thing a Trojan and some
>> versions of NAV alert on it (under odd circumstances which are in fact
>> false alarms).
>>
>> I'm sorry for beating this example to death since it's been discussed
>> in other threads. But in my mind, it's important to your/our purpose
>> in this thread. If something as relatively simple as the batch example
>> is cause for debate, we've got a lot of work cut out for us :)
>
>to my mind, the batch file itself is not a trojan, nor is it even a
>'potential trojan'... it does nothing destructive and it does nothing
>useful - i'd call it garbage...
That's what I called it as well.
>the *.com file it creates is a corrupt program, no more, no less...
Yep.
> this may seem like semantics, but... if the offspring can't do what the
> parent does, it isn't a replica of the parent... ergo the parent has not
> self-replicated...
Interesting point. It makes me wonder if the batman virus the
snippet resembles qualifies as a virus. Neither form (bat nor com)
actually makes a replica. The com file can't do what the batch file
does, and the batch file can't do what the com file does. It seems
that the com file acts sort of like another thread would in a multi-
threaded process, bat spawns com, and com replicates bat and
terminates ~ poof.
> > But I wonder what you think about alerting on it as a Trojan.
>
> i would think it's just as silly... it doesn't harm your data or expose
> it to third parties, it doesn't make your system open to outside
> control, it doesn't even do anything to degrade your performance...
Not a virus, not even a trojan, maybe not even malware, but
it is a comdropper, as it drops a com file. I can only see it as
useful to have as *part* of a heuristic detection. Seeing one of
these should cause *some* concern because there is no reason
a legitimate program would use it.
..nor do I worry overmuch about an AV program disappearing
folders from my e-mail client.
To me, a nuisance program is an unauthorized program running
on the machine. If the nuisance program has a programming error
built in (intentionally or otherwise), it may be used maliciously. We
discussed earlier that programs such as these could be called
trojans (in the wider definition). There was an issue with a help
file and interprocess communication some time ago that became
a part of an overall exploit. I don't remember exactly what it was,
but I seem to remember that the flaw was at first thought to be
useless from the malware coding standpoint, but soon proved to
be otherwise (IIRC it jumped 'security zones' and executed a
program in the local zone context).
I guess what I'm saying is that I see no way around the 'whitelist'
approach, and if you choose to exclude certain programs from
being detected because they are trivial, you are lessening the
effectiveness of the detector for strengthening security.
Obviously, you can't have it detect *everything*, so I guess
you might as well draw the line at nuisanceware. Then I might
opt to buy the anti-malware program that does detect it, and
lets me decide whether to ignore it or not.
> if a company is afraid that some nuisance program has done more than
> just be a nuisance they obviously don't know enough about the program to
> base any conclusions on and need to do more research... it's silly to treat
> all compromises as worst case scenarios...
True enough, and if any company is really that concerned about
security, they would have a permissions whitelist and not need
to be concerned with a malware detector. Such a malware
detector would have to be aimed at the middle ground market
where the users are savvy enough to make use of it, but not
serious enough about security to realize its shortcomings.
>>>>@ECHO OFF
>>>>COPY %0 A.COM>NUL
>> It doesn't meet a fully qualified definition of virus, but it does
>> meet the "self replicating" _essential_ portion
>this may seem like semantics, but... if the offspring can't do what the
>parent does, it isn't a replica of the parent... ergo the parent has not
*self*-replicated... (my emphasis)
Computer viruses live in conditions conducive to evolution, i.e.
imperfect replication subject to selection pressure.
The "imperfection" of replication may be deliberate, such as
polymorphism, or a side-effect, e.g. as retention of lists of
previously-infected systems, or SE content assembled from the host's
data material, or just a buffer or cluster slack space thing.
In biology, most higher-order reproduction is not "self-replicating"
in the true sense. This involves alternating haploid and diploid
phases of the life cycle, sexual reproduction that serves to create
difference through cross-genetics, metamorphosis (e.g. tadpole to
frog) and alternate hosts (e.g. tapeworm in one host, cysts in a
different host - or malaria within mosquito vs. human hosts).
In malware, we often see polyfunctional entities that spread
themselves in different forms, depending on which mechanism is used, as
well as those that carry daughter malware in the bomb-bay. For
example, if Klez drops Elkern, this isn't "self"-replication but is
definitely malware behavior, and viral in the classic sense that
Elkern is a code infector that cannot live outside of a host file.
>i would think it's just as silly... it doesn't harm your data or expose
>it to third parties, it doesn't make your system open to outside
>control, it doesn't even do anything to degrade your performance...
No, but it is rendered in a form that is easily extendable, and
travels in fully editable source form. This type of malware IMO is so
open to mutation that heuristic awareness rather than dumb bytepattern
detection would seem more appropriate.
IOW it would be trivial to graft a significant payload onto this,
using this particular snippet to "spread". As the snippet is the seat
of such core functionality, it might be worth detecting, one way or
another - though with risks of false-positives from installers etc.
>to my mind, the batch file itself is not a trojan, nor is it even a
>'potential trojan'... it does nothing destructive and it does nothing
>useful - i'd call it garbage...
At this stage, as a "potential" problem, you could consider it a zoo
specimen. Whether or not an av should detect it is more about how
far you want to push heuristic awareness of not-yet-existing malware,
and most av's have back-pedalled a bit on extravagant claims there.
As to whether it's worth specifically detecting as-is, as an existing
malware; no, I'd not lose much sleep if it was missed.
ultimately the batman virus creates replicas of itself so it's a
virus... that it does it through the use of a 'helper' program that it
drops is not enough to disqualify it as self-replication...
the snippet that caused the false alarms dropped a dud as far as
'helper' programs go...
programs that create other completely different programs, which in turn
create other completely different programs, which in turn . . . do not
qualify as viruses... i'm not sure what they'd qualify as, if they
existed...
computer virus offspring, though they may not be byte for byte identical
to their parents, are functionally equivalent to their parents...
[snip]
> In biology, most higher-order reproduction is not "self-replicating"
but we're discussing neither biology, nor higher order reproduction...
and even if we were, i think you'd be hard pressed to disprove the
assertion that your body is functionally equivalent to that of your
father's (barring injury/disease/etc) and for the most part your
mother's as well... people don't give birth to elephants or muskrats,
viruses don't produce word processors...
an evolutionary capability sufficient to produce offspring that
are functionally different from their parents (i think) would generally
result in non-viable offspring (at least as far as computer viruses go)...
> In malware, we often see polyfunctional entities that spread
> themselves in different forms, depending on which mechanism is used, as
> well as those that carry daughter malware in the bomb-bay. For
> example, if Klez drops Elkern, this isn't "self"-replication but is
> definitely malware behavior, and viral in the classic sense that
> Elkern is a code infector that cannot live outside of a host file.
indeed, but you wouldn't say that elkern is the offspring of klez...
>>i would think it's just as silly... it doesn't harm your data or expose
>>it to third parties, it doesn't make your system open to outside
>>control, it doesn't even do anything to degrade your performance...
>
>
> No, but it is rendered in a form that is easily extendable, and
> travels in fully editable source form.
so are all batch files...
> This type of malware IMO is so
> open to mutation that heuristic awareness rather than dumb bytepattern
> detection would seem more appropriate.
the batman virus is malware, yes, but the snippet causing false alarms?
copy %0 a.com ? maybe we should just outlaw batch files... heck, let's
get rid of all script languages...
> IOW it would be trivial to graft a significant payload onto this,
> using this particular snippet to "spread".
to my mind it would be equally trivial to graft that payload onto thin
air...
> As the snippet is the seat
> of such core functionality, it might be worth detecting, one way or
> another - though with risks of false-positives from installers etc.
well, i can't think of any good reason to have that line of code in a
batch file, and i do know of at least one bad reason to have that line
of code in a batch file (batman) so i agree that a malware scanner
should keep an eye out for it...
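keeping an eye out for it could be as simple as this (a python sketch,
case-insensitive and whitespace-tolerant... a real scanner would do much
more, and the pattern shown is only the one discussed here)...

  # sketch: flag batch files containing a 'copy %0 <name>.com' construct
  import re
  import sys

  SUSPICIOUS = re.compile(rb"copy\s+%0\s+\S+\.com", re.IGNORECASE)

  def flag_batch(path):
      with open(path, "rb") as f:
          if SUSPICIOUS.search(f.read()):
              print(path + ": contains a self-copying batch construct")

  for name in sys.argv[1:]:
      flag_batch(name)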
the point in calling something a nuisance program or a joke program is
to distinguish it from other, more serious malware... if something is a
trojan, it shouldn't be called a nuisance program and vice versa...
if a nuisance program has a programming error (errors aren't
intentional, by the way) then it's a buggy nuisance program, but if we
called everything that had a bug in it a trojan then there would be no
distinction between programs and trojans... holding nuisance programs
more accountable for bugginess than other software is a double standard...
if a 'nuisance' program has intentionally introduced code in it that
makes affected systems vulnerable to some secondary compromise then it
is not a nuisance program at all, it's something worse...
[snip]
>>if a company is afraid that some nuisance program has done more than
>>just be a nuisance they obviously don't know enough about the program to
>>base any conclusions on and need to do more research... it's silly to treat
>>all compromises as worst case scenarios...
>
>
> True enough, and if any company is really that concerned about
> security, they would have a permissions whitelist and not need
> to be concerned with a malware detector. Such a malware
> detector would have to be aimed at the middle ground market
> where the users are savvy enough to make use of it, but not
> serious enough about security to realize its shortcomings.
to be perfectly honest, i can see no way for white-lists to stand on
their own... black-lists can help you make better decisions about what
to allow on to your white-list...
>programs that create other completely different programs, which in turn
>create other completely different programs, which in turn . . . do not
>qualify as viruses... i'm not sure what they'd qualify as, if they
>existed...
Thinking of bi-phasic life cycles that occur in both biology and IT.
In biology, this often correlates to haploid vs. diploid phases; for
example, the human haploid phase is brief, unicellular, and different
according to sex; egg cells that exist purely within the body, mature
over a lifetime, and "ripen" once a month vs. very small and
energetically motile unicellular sperms. But in "lower" organisms, the
haploid phase can be significant or even predominant, while still
differing hugely from the diploid form's structure and behaviour.
Other biological bi-phasic life cycles are those where different forms
live in different hosts, and alternate between the two forms. For
example, tapeworms may live in one host as cysts, but as gut worms in
the other host, or other parasites may take one form within snails,
mosquitos, flies etc. and another within humans.
In IT, we see complex malware that exhibit wide ranges of spreading
behaviour, and may alternate between these. For example, worm-spread
as auto-generated email message and attachment, then flit over the LAN,
then infect existing files that are passively spread, etc.
Same goes for cross-platform malware that arrive as .doc and leave as
.xls or .vbs, etc. In many cases, the part of the malware that is
irrelevant to the current host is carried inactive as "comments" or
just dead space, but is then expressed when the relevant host is
found. This is very much in keeping with how unexpressed genetic code
within a genotype is carried hidden within a phenotype, and just as
you can have "throwback" code that isn't used, so one sees gill clefts
etc. in human embryos that have vanished by the time of birth.
>computer virus offspring, though they may not be byte for byte identical
>to their parents, are functionally equivalent to their parents...
See above. Even though IT "life" does not as yet combine with other
"life" ITW to spontaneously generate new forms, and selection pressure
plus random mutation thus far accounts for hardly any variance
("mutations" are nearly always created by human coders), we are seeing
partial offspring, and offspring that while containing all of the
code, are expressing only some of it according to the host involved.
>but we're discussing neither biology, nor higher order reproduction...
>and even if we were, i think you'd be hard pressed to disprove the
>assertion that your body is functionally equivalent to that of your
>father's (barring injury/disease/etc) and for the most part your
>mother's as well... people don't give birth to elephants or muskrats,
>viruses don't produce word processors...
Humans not only produce unicellular offspring but depend upon these to
create what we conventionally think of as their children (strictly
speaking, these are the product of the unicellular haploid offspring).
>an evolutionary capability sufficient to produce offspring that
>are functionally different from their parents (i think) would generally
>result in non-viable offspring (at least as far as computer viruses go)...
Yes, that's usually what happens; most random changes in code caused
by what is really damage to the code do result in buggy code that no
longer lives. If you throw random wads of junk into WINWORD.EXE, you
are unlikely to spontaneously end up with EXCEL.EXE; it's the million
monkeys with typewriters all over again.
What we refer to as "mutated" viruses in IT are more akin to genetic
engineering in biology. Whereas biology has been self-perpetuating
for millennia by the time we take our first steps in designed
"evolution", IT has always been designed, is as yet rarely
self-perpetuating, and has not developed a standard means of mixing
code to spontaneously generate new forms (which is what "sex" is).
The reason why "sex" doesn't make sense in IT is that there's very
little variation within a "species" of malware, because the code is
relatively small and reproduction is rarely imperfect (thus doesn't
generate variations spontaneously).
If there was only one type of human, one type of donkey, one type of
cat, then "sex" would not make sense; between cat and donkey, there'd
be too little in common to yield a viable creature, and if all cats
were the same, no point in shuffling the (same) code between them.
>> In malware, we often see polyfunctional entities that spread
>> themselves in different forms, depending on which mechanism is used, as
>> well as those that carry daughter malware in the bomb-bay. For
>> example, if Klez drops Elkern, this isn't "self"-replication but is
>> definitely malware behavior, and viral in the classic sense that
>> Elkern is a code infector that cannot live outside of a host file.
>indeed, but you wouldn't say that elkern is the offspring of klez...
You might; it's confusing enough to give pause. Elkern is Elkern and
Klez is Klez, yet most Elkern prolly spreads between PCs within the
bomb-bay of Klez (anyone seen a *pure* Elkern infection out there?)
This is like bacteria and bacteriophage viruses. Classic case; a
particular bacterium lives within the body and causes no problem, until
it's infected with a phage that injects code that "teaches" the
bacteria to produce a toxin that is damaging to the host. A disease
thus develops, attracting the attention of the host who then treats it
with antibiotics that kill the bacteria. But is the bacterium the
cause of the disease, or the phage that carries the code?
>>>i would think it's just as silly... it doesn't harm your data or expose
>>>it to third parties, it doesn't make your system open to outside
>>>control, it doesn't even do anything to degrade your performance...
>> No, but it is rendered in a form that is easily extendable, and
>> travels in fully editable source form.
>so are all batch files...
Exactly; so are most scripts and macro malware, unless encoded in
some way (scripting environments may support encoding of source to
protect "intellectual property" rights of the author).
This is why relying on traditional mugshot recognition av as a primary
(or only) defence against scripts makes little sense to me; new
recognition-breaking "mutations" are but a keyboard away.
>...maybe we should just outlaw batch files... heck, let's
>get rid of all script languages...
Yep; that's a far better approach to management of this risk.
Scripts exist so that ppl with skills between those of the software
author and end user can roll custom "code" to automate solutions for
particular installations. This is typically what the system
administrator of a LAN would be doing.
But let's say you are a stand-alone user who does not write scripts.
You use software that is written in "real" languages that don't use
scripts either. In fact, the only scripts you are likely to encounter
are those written by malware authors.
Why the hell would you want to keep live scripting engines running
around? Surely, disabling these scripting engines is a more effective
risk management for this type of malware than waiting for your av
vendor to code detection and cure for every variant that arises?
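One way to do exactly that on a Windows box (a hedged sketch -- the registry
location and the "Enabled" value are the commonly documented Windows Script
Host switch, but verify them for your particular Windows version before
relying on this; it needs administrator rights to run):

  # sketch: disable Windows Script Host machine-wide via its "Enabled" value
  # (assumed key/value -- check the documentation for your Windows version)
  import winreg

  KEY = r"SOFTWARE\Microsoft\Windows Script Host\Settings"

  with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                          winreg.KEY_SET_VALUE) as settings:
      winreg.SetValueEx(settings, "Enabled", 0, winreg.REG_DWORD, 0)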
As the tag says...
>-- Risk Management is the clue that asks:
"Why do I keep open buckets of petrol next to all the
ashtrays in the lounge, when I don't even have a car?"
>----------------------- ------ ---- --- -- - - - -
[snipped some interesting stuff]
> Even though IT "life" does not as yet combine with other
> "life" ITW to spontaneously generate new forms, and selection pressure
> plus random mutation thus far accounts for hardly any variance
> ("mutations" are nearly always created by human coders), we are seeing
> partial offspring, and offspring that while containing all of the
> code, are expressing only some of it according to the host involved.
[...and some more here]
> The reason why "sex" doesn't make sense in IT is that there's very
> little variation within a "species" of malware, because the code is
> relatively small and reproduction is rarely imperfect (thus doesn't
> generate variations spontaneously).
If the parts of the 'human coders' (mutation*), and of the 'selection pressure'
(feedback reports from successful forms) were to be automated by a central
(worm server?), then all or at least most of the aforementioned restrictions
would be lifted. Do you think that some day in the future this could be realized?
I was pondering the view that organisms consist of cells in symbiosis, and
that viruses are mutant renegades that just happened to have met with some
success. In the IT analogy the parts lacking were code complexity, and the
ability to react (mutate) with enough diversity to overcome the faster changes
in environment due to host-hopping.
*a certain amount of 'mutation' could be programmed in, but if
a successful form were able to download a new (different) set
of vulnerability exploit code (different than the one it used to gain
access to its current host) from a central location (or locations)
and target IPs known to contain these vulnerabilities, it would
then come closer to approximating 'life' in its complexity.
(...well, this thread was already getting kinda weird) ;o)
>> Even though IT "life" does not as yet combine with other
>> "life" ITW to spontaneously generate new forms, and selection pressure
>> plus random mutation thus far accounts for hardly any variance
>> ("mutations" are nearly always created by human coders), we are seeing
>> partial offspring, and offspring that while containing all of the
>> code, are expressing only some of it according to the host involved.
>> The reason why "sex" doesn't make sense in IT is that there's very
>> little variation within a "species" of malware, because the code is
>> relatively small and reproduction is rarely imperfect (thus doesn't
>> generate variations spontaneously).
>If the parts of the 'human coders' (mutation*), and of the 'selection pressure'
>(feedback reports from successful forms) were to be automated by a central
>(worm server?), then all or at least most of the aforementioned restrictions
>would be lifted. Do you think that some day in the future this could be realized?
Yep. Selection pressure is already there, but doesn't have enough
genuine mutation to operate on (in "IT time", at least).
One could create a virtual "script kiddie" bot that mimics the limited
intelligence to cut and paste interpreted code into new malware, with
the payoff being overcoming mugshot recognition.
But I think what will facilitate this is the proposed new architecture
that will make the use of standard and mutually-swappable black-box
design attractive. "Trusted computing" will offer not only a hurdle,
but an opportunity; once past the barrier and into trusted space,
malware would be able to operate beyond the user's reach, protected by
the very system that was supposed to "solve" the problem. It can pose
as the protected product of a vendor, who knows nothing about how the
malware was encrypted etc. and therefore has no way to control it.
This system might exert selection pressure such that a modular form
required to apply boiler-plate methods against the system will also
permit mutual plug-in compatibility. Once there's a common standard
for ITW code exchange, you have your "sex" right there.
>I was pondering the view that organisms consist of cells in symbiosis, and
>that viruses are mutant renegades
Viruses aren't cellular - that's a big part of what makes them viruses
:-)
>that just happened to have met with some success.
Animal and plant cells have many things in common, including a
formalised nucleus and the sexual process of formally exchanging
genetic material. Plants often have cell walls, and may have
photosynthesis, whereas animals have neither.
But bacteria lack most of these formal cellular features, and have
different and more primitive alternatives. This difference is what
allows the biological equivalents of pattern recognition, heuristics,
risk management and targeted eradication.
That gives several opportunities to target bacteria's biological
processes without destroying the host; bacteria use different
"batteries" (ribosomes, the equivalent of mitochondria, etc.) and cell
membranes, so these can be attacked. Hence you can treat bacterial
bronchitis with antibiotics, but can't do that for viral 'flu.
Outside the body (pre-infection, equivalent to inactive malware that
hasn't entered the system yet) you can just use methods that destroy
all life processes; antiseptics, disinfectants etc. The IT equivalent
would be an email policy that rejects all code or script attachments,
or setting your av to delete detected malware where the context is
clearly incoming external material scanned before use.
Viruses have hardly any processes at all, so there's nothing to target
except the unique genetic material itself, or peculiarities of the
capsule that this is hidden behind. That's why one usually has to
resort to immunology to get the fine resolution required to
selectively destroy viruses, although sometimes there are unique
processes you can attack as well (e.g. HIV's reverse transcriptase).
That is what makes traditional malware viruses "viruses", to my way of
thinking in terms of how such taxonomy is useful:
- No, you cannot rely on Turing Test filtering, because the virus
may be within a file that really was intended to be sent by a human.
- No, you can't clean this by simply searching for known file names
or unexpected files and delete these.
- Yes, you do need a fine-resolution av tool to go within an
existing file and fillet out the virus content.
- Yes, you may leave the system inoperable or lose data if you
simply delete all files detected as containing the malware.
- No, there are no system-level settings to fix, in the case of a
pure virus (hah!) that operates purely at the lower level of "infect
file, wait for the infected file to be passively spread and triggered"
Because so few modern malware fulfil the last expectation in
particular, I tend to go on what I specifically read about each
malware rather than general taxonomic expectations :-)
To extend the analogy (or is it merely the same concepts arising in
two different instances of the class "evolutionary system"?): Just as
you can't clean malware hidden within mailboxes or SR data, so you
can't always kill biological viruses hidden within cells. Hence
shingles years after chickenpox, or recurrent cold sores.
>In the IT analogy the parts lacking were code complexity, and the
>ability to react (mutate) with enough diversity to overcome the faster changes
>in environment due to host-hopping.
>*a certain amount of 'mutation' could be programmed in, but if
>a successful form were able to download a new (different) set
>of vulnerability exploit code (different than the one it used to gain
>access to its current host) from a central location (or locations)
>and target IPs known to contain these vulnerabilities, it would
>then come closer to approximating 'life' in its complexity.
That's one approach, rather like the queen termite and ant model, or
update server and download bots :-)
Protecting the queen is the challenge there. In a "trusted computing"
world, commercial vendors such as MS or the media pimps will have that
ability, and function in exactly that manner. Institutionalised RATs,
call-home spyware, destructive payloads, malware, the works.
The other is to evolve a standard black-box code component that is
interchangeable; this is what defines species (that which can
interbreed, i.e. exchange genetic material within the sexual
reproduction system) and facilitates generic interchange.
The problem with black-box based auto-mutation is that the black box
itself may have invariant features that facilitate recognition and
attack on the whole of that species; there may also be standard
behaviours that are heuristically detectable. That's why so many
"virus engines" such as DAME and whatever made the Anna Kournikova
script malware have not been such a crisis as one might expect.
A pure virus isn't likely to spread nearly as effectively as one
that automates its spread (i.e. behaves more like a worm). It has the
advantage of passing through the Turing Test and firewall-level
heuristics, but has its own heuristic exposure through the behaviours
it uses to infect existing files or boot code.
To comment further would feed the script kiddies, so I'll stop there.
>(...well, this thread was already getting kinda weird) ;o)
Hey, weird is guuud... <g>
>------------ ----- ---- --- -- - - - -
Our senses are our UI to reality
>------------ ----- ---- --- -- - - - -
Intended symbiosis gone awry.
> This system might exert selection pressure such that a modular form
> required to apply boiler-plate methods against the system will also
> permit mutual plug-in compatibility. Once there's a common standard
> for ITW code exchange, you have your "sex" right there.
>
> >I was pondering the view that organisms consist of cells in symbiosis, and
> >that viruses are mutant renegades
>
> Viruses aren't cellular - that's a big part of what makes them viruses
> :-)
The point was that the code provided the cell with instructions
to manufacture that which benefits the community, and the code,
once mutated, no longer has the well being of the community as
a priority. TCPA changes the paradigm, but it still has the same
underlying 'flaw' which makes malware an eventual inevitability.
I mentioned this in response to the whole 'which came first the
chicken or the egg' question you posed with regard to the bio
virus and the host system. It may well be that the host system
merely provided the proper environment to make a mutant
successful, and the mutation, if it indeed predated the host
system, did not at that time meet with any such success.
> > This would all be integral to a properly implemented whitelisting-only
> > scheme and such a scheme happily sidesteps the issue of "what is a Trojan"
> > by providing a fundamentally better "solution" than current blacklisting
> > approaches -- it allows that which has been authorized to run to run and
> > prevents everything else. Under a whitelisting approach, the issue of
> > whether something is good, bad, viral, Trojanic or just undetermined is
> > irrelevant. The question at issue is "has a suitable authority cleared
> > this code for execution (by this user) on this machine?".
>
> point of fact, no, that is not the question at issue... not when we're
> talking about trojans...
'Tis...
> the white-list method, strong though it may be, is not a panacea, it
Properly implemented in a corporate environment it would be nearly
infinitely better than what we have now if for no other reason than
the AV upgrade shuffle would be ended... It is better for other
reasons too -- for example, the recently announced hole in
MailSweeper's MIME attachment detection and filtering would not be
a worry were you whitelisting as any unknown or otherwise
unauthorized code that sneaks past your "border patrol" will still
not have a chance to run.
> does not solve all problems... it addresses the issue of unauthorized
> execution... unauthorized execution is, in general, not at issue when
> we're talking about trojans... when we're talking about trojans the
> issue is making bad authorization decisions, which a white-list cannot
> help us with...
Of course it helps you. It is pointless whitelisting unless you
couple it with equally strong execution prohibitions as traditional
blacklisting virus scanners employ. If you have a product to enforce
a "you may only run this code" (whitelist) policy, you certainly would
not implement it such that "any user" could override as otherwise it
would be an entirely wasteful exercise.
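For the avoidance of doubt about what "enforce" means here, the core
check is no more exotic than this (a minimal, hash-based sketch in
Python; the digest set and command-line handling are invented for
illustration only, not a description of any real product):

    # sketch: deny-by-default execution check against a set of
    # administrator-approved SHA-256 digests (a real product would hook
    # execution in the OS and fetch the set from a managed whitelist)
    import hashlib
    import sys

    AUTHORIZED_SHA256 = {
        # example entry (digest of an empty file) -- real entries would
        # come from the managed whitelist
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def is_authorized(path: str) -> bool:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest() in AUTHORIZED_SHA256

    if __name__ == "__main__":
        target = sys.argv[1]
        if not is_authorized(target):
            sys.exit(f"execution of {target} denied: not on the whitelist")

Everything not positively listed simply does not run -- no user override.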
> if a user knew that a particular file was a trojan, do you think he'd try
> to run it? ...
Stuff typical users -- what about professional network and system
admins who are expected to know better? Have you not heard the
stories about some of these fine folk who, even after hearing about
LoveLetter all over the news and being bombarded with warnings about
it between their carparks and desks, _still_ double-clicked it "just
to see if it was as bad as everyone said"??
Believe me -- we need whitelisting...
> ... no of course not, he probably wouldn't even bother
> downloading it... but since he doesn't know that he'll happily download
> it and equally happily ignore the white-list alert that can say little
> more than "this is an unknown program"...
Alert be damned. Smart whitelisting operations would securely log
forbidden execution attempts and collate them for HR for disciplinary
measures, reduced bonuses or other raises, etc. as suitable.
> > Of course, this approach will not work with typical (non-corporate) users
> > as the whole "problem" you are discussing here stems from the simple
> > observation that very few people are actually prepared to take proper
> > responsibility for the code they choose to run on their machines and given
> > that they will not be prepared to expend the (small) effort required to
> > maintain the whitelist authorization mechanism.
>
> that is not a fair characterization... in fact, i think you're not giving
> credit to the complexity of the problem...
>
> security (all security) comes down to one quintessential concept...
> *trust*... establishing trust and enforcing trust boundaries throughout
> a system... a white-list can be very effective at enforcing those trust
> boundaries but it falls down completely when it comes to establishing
> trust in the first place (ie. deciding what to trust when adding items
> to the white-list - something that would invariably need to occur)...
Actually, I disagree slightly. Trustworthiness comes into it, but at
the end of the day it is a balancing act - what is the benefit vs. the
(likely) exposure? Where are we comfortable on the continuum between
"any benefit of the technology is worth any cost" and "there is no
benefit great enough for us to increase our current risk exposure"?
For most corporate IT systems there are relatively simple methods for
establishing to a sufficient degree of certainty that a piece of code
whose use is proposed somewhere in the organization is "trustworthy
enough". Test networks, installation and ongoing change monitoring,
etc, etc, etc. Not enough companies do this at all though because they
have (stupidly) bought into the "commodity computer, commodity software"
myth. (Well, arguably both exist -- the myth is that because they exist
and can reduce your upfront costs they are "good for business".)
> arguably, establishing trust is the harder (and more significant)
> problem (if we could do that then would we need a white-list application
> in the first place?)...
Actually, yes.
Whitelisting is a halfway-house between the loose and sloppy, "bucket
security" approach of discretionary access controls and the formidably
complex schemes that are probably too constraining for anything apart
from the highest of high-security (military information-type) systems.
MS has _partially_
acknowledged this problem with the addition of Software Restriction
Policies to XP, but of course these were implemented in a security-
and integrity-ignorant way (about the only way MS programmers are able
to implement anything) as they ignore a plethora of sources of what a
contemporary Windows system will consider as "code", and thus leave
far too many holes in this particular security net...
> where the white-list falls down, the black-list (and in my suggestions
> in this thread, the 'grey-list') does not... the trojan is a trojan
> because the user is basing his/her evaluation of trust on false
> information and the black-list/grey-list represents a knowledge base
> that has the potential to correct the user's false impression and
> thereby facilitate better judgement...
That's fine if you can accept the (very high) error rate and actually
want your users to make security critical decisions that history shows
they are unable to make reliably and in a way that maintains or reduces
the employers' risk exposure...
> > Such "potentially dangerous but also useful" applications should only be
> > authorized to be run by suitably "qualified" users.
>
> agreed... but that isn't so much a malware issue as it is a user rights
> issue... (ie. trust boundaries pertaining to wetware entities rather
> than software entities)
Agreed -- being able to coordinate that type of policy with the other,
very similar kinds of policies that whitelisting would allow just seems
to make sense from a parsimony of administration effort perspective...
> >>then the problem is how to authorize changes to the ignore list without
> >>letting the malware see the private key - that requires some kind of
> >>clean boot technique/technology... the app can save proposed changes to
> >>the ignore list that are subject to review and authorization during a
> >>clean boot...
> >
> > Under a full-run whitelisting scheme, this is irrelevant, as no
> > "questionable" code should be able to be running anyway. If it is, that
> > means the whitelist authorizer has slipped up which means the system is
> > screwed anyway independent of the whitelisting system.
>
> you make the question of what to authorize and what not to authorize
> sound trivial... we both know it isn't...
It is relatively trivial for a well designed and run corporate network.
It involves snap-shotting the standard builds and any and all "updates"
that have to be pushed out to them after the system is field-deployed.
That is a relatively small amount of work for most of the machines in
a typical, contemporary, well-designed corporate network.
Of course, you can question how many such networks actually are so
well-designed and run. I cannot answer that, but if the answer is "not
many" then the issue of ROI for all that IT kit would have to be raised
with the management. Fix the design and maintenance and a whole lot of
other "wasteful" IT spending will fall off, _especially_ if you
add a good whitelisting code integrity management system to the mix.
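In case "snap-shotting" sounds grander than it is, here is roughly all
it amounts to (a sketch only; the paths and the JSON output format are
my assumptions, not a description of any particular product):

    # sketch: walk a standard build and record a digest for every file,
    # producing the raw material for a whitelist / integrity baseline
    import hashlib
    import json
    import os

    def snapshot(root: str) -> dict:
        """Map relative path -> SHA-256 hex digest for each file under root."""
        baseline = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                h = hashlib.sha256()
                with open(full, "rb") as f:
                    for chunk in iter(lambda: f.read(65536), b""):
                        h.update(chunk)
                baseline[os.path.relpath(full, root)] = h.hexdigest()
        return baseline

    if __name__ == "__main__":
        # e.g. point this at the master image of the standard desktop build
        with open("baseline.json", "w") as out:
            json.dump(snapshot("/path/to/standard/build"), out, indent=2)

Re-run it over each approved update and merge the results, and the
whitelist stays current with the field-deployed builds.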
It is a _lot_ harder for extant systems that have been haphazardly (or
entirely un-) managed. Almost impossible, in fact, unless you can
catalog each and every piece of code already in place and tie it back
to a "trusted" source (in the weak sense of "already known to our system
so we know how to proceed in this case"). I introduce the doubters among
you
to Turing and his Halting Problem...
> ... and this problem is not
> independent of the whitelisting system... a whitelisting system needs to
> allow users/administrators to add items to the whitelist, the system
> would be useless without that... you cannot ignore the problem of
> establishing trust levels when you're enforcing trust boundaries... it
> is a cart without a horse to pull it...
I really don't see that this is a very difficult issue to solve at all.
Public key crypto should take care of such issues as authenticating such
"trust updates", as any modification to the whitelist is. I think the
hardest thing is ensuring the client machines implementing the policies
actually update in a timely manner -- this is only really a "problem"
in the (hopefully very rare) circumstances where you need to revoke the
trust vested in already deployed code, but often when that is what is
needed, it _is_ crucial to distribute the update ASAP.
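Just to sketch that authentication step (and only that step) -- assuming
an Ed25519 key pair and the third-party Python "cryptography" package;
the function and file names are invented, and deciding *what* deserves
to be signed in the first place is, as discussed, a separate question:

    # sketch: refuse a whitelist update unless its signature verifies
    # against the administrator's public key (key distribution and the
    # decision of what to sign are out of scope here)
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PublicKey,
    )

    def apply_update(public_key_bytes: bytes, update_path: str,
                     signature_path: str, whitelist_path: str) -> bool:
        public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
        with open(update_path, "rb") as f:
            update = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            public_key.verify(signature, update)  # raises if tampered with
        except InvalidSignature:
            return False                          # refuse the update
        with open(whitelist_path, "ab") as out:   # append the new entries
            out.write(update)
        return True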
--
Nick FitzGerald
[read: "in a perfect world"]
> in a corporate environment
[read: "in a perfect arbitrary subset of a perfect world"]
> it would be nearly
> infinitely better than what we have now if for no other reason than
> the AV upgrade shuffle would be ended...
it would make the situation different, not necessarily better...
> It is better for other
> reasons too -- for example, the recently announced hole in
> MailSweeper's MIME attachment detection and filtering would not be
> a worry were you whitelisting as any unknown or otherwise
> unauthorized code that sneaks past your "border patrol" will still
> not have a chance to run.
unless you were expecting the attachment and chose to authorize it...
>>does not solve all problems... it addresses the issue of unauthorized
>>execution... unauthorized execution is, in general, not at issue when
>>we're talking about trojans... when we're talking about trojans the
>>issue is making bad authorization decisions, which a white-list cannot
>>help us with...
>
>
> Of course it helps you. It is pointless whitelisting unless you
> couple it with equally strong execution prohibitions as traditional
> blacklisting virus scanners employ. If you have a product to enforce
> a "you may only run this code" (whitelist) policy, you certainly would
> not implement it such that "any user" could override as otherwise it
> would be an entirely wasteful exercise.
as if 'any user' is the threat... next you'll say that viruses don't
work on *nix because they can't get root access... administrators make
poor decisions too... and frankly, in some cases it's not even a matter
of the decisions being poor, but rather the decisions are based on
inaccurate information and therefore may be as good as they can possibly
be under those circumstances and still land the decider in trouble...
>>if a user knew that a particular file was a trojan, do you think he'd try
>>to run it? ...
>
>
> Stuff typical users -- what about professional network and system
> admins who are expected to know better? Have you not heard the
> stories about some of these fine folk who, even after hearing about
> LoveLetter all over the news and being bombarded with warnings about
> it between their carparks and desks, _still_ double-clicked it "just
> to see if it was as bad as everyone said"??
>
> Believe me -- we need whitelisting...
never said we didn't... what i said was that it wasn't a panacea, it
doesn't solve all problems and it leaves a big one open... how does one
know if something is really safe to add to the whitelist? where do we go
to find the magically accurate descriptions of what arbitrary content
really does? who is the keeper of the truth and where is their website?
a whitelist is worthless if it is empty, and it will remain empty until
you add something to it so you need to have the best possible
information available for deciding what to add and what not to add...
blacklists have the potential to provide information about a subject
piece of active content that might not otherwise be available to the
person making that decision...
>>... no of course not, he probably wouldn't even bother
>>downloading it... but since he doesn't know that he'll happily download
>>it and equally happily ignore the white-list alert that can say little
>>more than "this is an unknown program"...
>
>
> Alert be damned. Smart whitelisting operations would securely log
> forbidden execution attempts and collate them for HR for disciplinary
> measures, reduced bonuses or other raises, etc. as suitable.
the existence of the aforementioned hr department and/or those negative
motivators is not, as far as i am concerned, a part of a whitelist
system... they are part of a corporate whitelist policy (and they
wouldn't necessarily be corporate wide, but rather departmentally
localized - let's see hr 'discipline' the CEO - let's see a whitelist that
works in an administrative environment where administrators *develop*
automation tools)...
>>>Of course, this approach will not work with typical (non-corporate) users
>>>as the whole "problem" you are discussing here stems from the simple
>>>observation that very few people are actually prepared to take proper
>>>responsibility for the code they choose to run on their machines and given
>>>that they will not be prepared to expend the (small) effort required to
>>>maintain the whitelist authorization mechanism.
>>
>>that is not a fair characterization... in fact, i think you're not giving
>>credit to the complexity of the problem...
>>
>>security (all security) comes down to one quintessential concept...
>>*trust*... establishing trust and enforcing trust boundaries throughout
>>a system... a white-list can be very effective at enforcing those trust
>>boundaries but it falls down completely when it comes to establishing
>>trust in the first place (ie. deciding what to trust when adding items
>>to the white-list - something that would invariably need to occur)...
>
>
> Actually, I disagree slightly. Trustworthiness comes into it, but at
> the end of the day it is a balancing act - what is the benefit vs. the
> (likely) exposure?
that is not security, nor what security is about... that is risk
management... it is what security people actually practice day to day,
because the ability to solve the trust problems perfectly does not exist...
> Where are we comfortable on the continuum between
> "any benefit of the technology is worth any cost" and "there is no
> benefit great enough for us to increase our current risk exposure"?
>
> For most corporate IT systems there are relatively simple methods for
> establishing to a sufficient degree of certainty that a piece of code
> whose use is proposed somewhere in the organization is "trustworthy
> enough". Test networks, installation and ongoing change monitoring,
> etc, etc, etc. Not enough companies do this at all though because they
> have (stupidly) bought into the "commodity computer, commodity software"
> myth. (Well, arguably both exist -- the myth is that because they exist
> and can reduce your upfront costs they are "good for business".)
and if everyone could afford that resource expenditure, the world would
be a happier place...
>>arguably, establishing trust is the harder (and more significant)
>>problem (if we could do that then would we need a white-list application
>>in the first place?)...
>
>
> Actually, yes.
>
> Whitelisting is a halfway-house between the loose and sloppy, "bucket
> security" approach of discretionary access controls and the formidably
> complex schemes that are probably too constraining for anything apart
> from the highest of high-security (military information-type) systems.
> MS has _partially_
> acknowledged this problem with the addition of Software Restriction
> Policies to XP, but of course these were implemented in a security-
> and integrity-ignorant way (about the only way MS programmers are able
> to implement anything) as they ignore a plethora of sources of what a
> contemporary Windows system will consider as "code", and thus leave
> far too many holes in this particular security net...
i disagree... if we had some magically perfect way to know if something
were trustworthy, we simply wouldn't touch things that didn't pass that
test...
>>where the white-list falls down, the black-list (and in my suggestions
>>in this thread, the 'grey-list') does not... the trojan is a trojan
>>because the user is basing his/her evaluation of trust on false
>>information and the black-list/grey-list represents a knowledge base
>>that has the potential to correct the user's false impression and
>>thereby facilitate better judgement...
>
>
> That's fine if you can accept the (very high) error rate and actually
> want your users to make security critical decisions that history shows
> they are unable to make reliably and in a way that maintains or reduces
> the employers' risk exposure...
if i've failed to make this clear before, let me try again... i think
blacklists/greylists *complement* whitelists... i don't think corporate
users should be given override access to a whitelist system, and i don't
particularly see how a greylist, per se, would have a high error rate
(remember, a greylist wouldn't be saying "this is bad" or "this is not
bad", as that would make it a blacklist)...
[snip]
>>>Under a full-run whitelisting scheme, this is irrelevant, as no
>>>"questionable" code should be able to be running anyway. If it is, that
>>>means the whitelist authorizer has slipped up which means the system is
>>>screwed anyway independent of the whitelisting system.
>>
>>you make the question of what to authorize and what not to authorize
>>sound trivial... we both know it isn't...
>
>
> It is relatively trivial for a well designed and run corporate network.
and determining if a program halts is trivial for a particular subset of
all programs... that doesn't make the problem solved...
[snip]
> It is a _lot_ harder for extant systems that have been haphazardly (or
> entirely un-) managed. Almost impossible, in fact, unless you can
> catalog each and every piece of code already in place and tie it back
> to a "trusted" source (in the weak sense of "already known to our system
> so we know how to proceed in this case"). I introduce the doubters among
> you
> to Turing and his Halting Problem...
and i introduce to you the concept of packaging up specialist knowledge
of bad things and/or things that have been known to be bad in certain
contexts, and making it available to the public so they can further
narrow their field of 'trusted' content *and* make potentially better
decisions about new content coming in, in those circumstances where
throwing it out by default isn't an option...
>>... and this problem is not
>>independent of the whitelisting system... a whitelisting system needs to
>>allow users/administrators to add items to the whitelist, the system
>>would be useless without that... you cannot ignore the problem of
>>establishing trust levels when you're enforcing trust boundaries... it
>>is a cart without a horse to pull it...
>
>
> I really don't see that this is a very difficult issue to solve at all.
> Public key crypto should take care of such issues as authenticating such
> "trust updates",
pki can tell you someone/thing is what he/she/it claims to be, but not
whether or not you can trust him/her/it... it does not establish trust
in an entity, it associates existing trust with an entity...
>> Selection pressure is already there, but doesn't have enough
>> genuine mutation to operate on (in "IT time", at least).
>> One could create a virtual "script kiddie" bot that mimics the limited
>> intelligence to cut and paste interpreted code into new malware, with
>> the payoff being overcoming mugshot recognition.
>> But I think what will facilitate this is the proposed new architecture
>> that will make the use of standard and mutually-swappable black-box
>> design attractive. "Trusted computing" will offer not only a hurdle,
>> but an opportunity; once past the barrier and into trusted space,
>> malware would be able to operate beyond the user's reach, protected by
>> the very system that was supposed to "solve" the problem. It can pose
>> as the protected product of a vendor, who knows nothing about how the
>> malware was encrypted etc. and therefore has no way to control it.
>Intended symbiosis gone awry.
Yep. But then, that's the beauty of the evolutionary model; it
ignores intent and looks at possibility, on the basis that if there is
sufficient complexity, the possible becomes probable.
All that "sufficient complexity" means is a large sample size, where
complexity is sample density per unit of time.
With low complexity, you can still figure things from first
principles, i.e. system is still deterministic. It's like trying to
remember how many cars you have; it's easy, you just go into the
garage and count them, no need for measurement tools etc.
With high complexity, determinism breaks down. There are two factors
there; the imperfection factor, which generates unexpected results,
and the ways in which things can interact or combine, which overwhelms
attempts to enumerate all possible states from initial parameters.
Both of these factors operate in IT, so that instead of predicting
what will happen from the principles of the system (design intent
etc.) it can be useful or necessary to consider the system from the
outside, as if one knew nothing about how it works (more accurately;
so that one assumes nothing about how it works).
Just as a sufficiently rich "organic soup" might give rise to "life",
so a sufficiently complex IT world may give rise to the equivalent.
>> >I was pondering the view that organisms consist of cells in symbiosis, and
>> >that viruses are mutant renegades
>> Viruses aren't cellular - that's a big part of what makes them viruses
>> :-)
>The point was that the code provided the cell with instructions
>to manufacture that which benefits the community, and the code,
>once mutated, no longer has the well being of the community as
>a priority.
Ah, that's intent again. Selection pressure darwins out that which is
not best fitted to survive, but a new mutation is by definition yet to
be filtered in this way, so all bets are off :-)
>TCPA changes the paradigm, but it still has the same underlying
>'flaw' which makes malware an eventual inevitability.
Yep. In the long run it's interesting to see whether designed
mutation (e.g. genetic engineering, state-ordered society, attempts to
design in security) fares any better than spontaneous mutation (e.g.
natural selection, free-market economy, open source perhaps).
>I mentioned this in response to the whole 'which came first the
>chicken or the egg' question you posed with regard to the bio
>virus and the host system. It may well be that the host system
>merely provided the proper environment to make a mutant
>successful, and the mutation, if it indeed predated the host
>system, did not at that time meet with any such success.
Where biology is concerned, origins are quite tricky. One supposes a
tobacco mosaic virus arose after tobacco in the same way that clone
replacement Epson ink cartridges arose after the targeted Epson
printer, but there may be some surprises.
I read some speculation about mitochondria, which are the cell's
"batteries", generating required energy. It was claimed that these
organelles have their own genetic code and in fact might be
intracellular symbiotes that arose independently - perhaps giving the
primordial soup the kickstart it needed to develop complexity.
The "hand of God"?
It's interesting to imagine a far advanced and evolved world of IT,
where the human originators are long forgotten. It wouldn't take
much; extrapolate CAD/CAM, mix in cheap automated space mining etc.,
self-spawn outwards from there, and there you are (or rather, there
you aren't, but their "your" bots are)
>---------- ----- ---- --- -- - - - -
[x] Always trust Microsoft
That's what happened to the E. coli bacteria about 21-22 years ago. It is a
bacterium needed to help digestion. Trouble is, a virus infected the cows' E.
coli bacteria, and if the bacteria were "sent" to humans (through a water
pipe, and they drank the water), they multiplied and made the humans sick.
That's what happened with the Walkerton, Ontario, Canada water supply (Off
Topic!) in May 2000. (I was just waiting to mention Walkerton the next time
someone talked about bacteria.)
Yes, I live in Walkerton and I am quite well today. At the time it happened I
was more concerned about the E. coli than the LoveLetter virus which was
also big news at that time.
>
> Computer Viruses follow a similar path; they provide the host file with
> the instructions, which in turn the host uses to create another virus.
> The fact that the virtual computer virus tries to replicate in a similar
> method to its real biological cousins is no accident either; the
> Author/Inventor actually had a clue about Viruses and obviously wanted
> his simulation to be as realistic as possible for a rather interesting
> computer program.
>
> For me deciding whether a program is a virus or a worm is a trivial
> affair when deciding on my own classifications.
>
>
A relatively simple matter, that.
> >> But I think what will facilitate this is the proposed new architecture
> >> that will make the use of standard and mutually-swappable black-box
> >> design attractive. "Trusted computing" will offer not only a hurdle,
> >> but an opportunity; once past the barrier and into trusted space,
> >> malware would be able to operate beyond the user's reach, protected by
> >> the very system that was supposed to "solve" the problem. It can pose
> >> as the protected product of a vendor, who knows nothing about how the
> >> malware was encrypted etc. and therefore has no way to control it.
>
> >Intended symbiosis gone awry.
>
> Yep. But then, that's the beauty of the evolutionary model; it
> ignores intent and looks at possibility, on the basis that if there is
> sufficient complexity, the possible becomes probable.
>
> All that "sufficient complexity" means is a large sample size, where
> complexity is sample density per unit of time.
[snip]
> Just as a sufficiently rich "organic soup" might give rise to "life",
> so a sufficiently complex IT world may give rise to the equivalent.
Survival of an individual is one thing, but there is power
in numbers as well, especially if there is cooperation of sorts.
> >> >I was pondering the view that organisms consist of cells in symbiosis, and
> >> >that viruses are mutant renegades
>
> >> Viruses aren't cellular - that's a big part of what makes them viruses
> >> :-)
>
> >The point was that the code provided the cell with instructions
> >to manufacture that which benefits the community, and the code,
> >once mutated, no longer has the well being of the community as
> >a priority.
>
> Ah, that's intent again. Selection pressure darwins out that which is
> not best fitted to survive, but a new mutation is by definition yet to
> be filtered in this way, so all bets are off :-)
Indeed, but consider that the replicative malware that has
successfully accessed a host environment not only is now
able to replicate there, but also able to report its success
to some center for the purpose of weighting the success
rate of progeny. You have selection pressure with a tuning
bias, as well as a source for increasing sufficient complexity
in the darwinian model.
Yes, the weakness would be the centralized nature of the
database, but even that could probably be distributed or
otherwise decentralized.
> >TCPA changes the paradigm, but it still has the same underlying
> >'flaw' which makes malware an eventual inevitability.
>
> Yep. In the long run it's interesting to see whether designed
> mutation (e.g. genetic engineering, state-ordered society, attempts to
> design in security) fares any better than spontaneous mutation (e.g.
> natural selection, free-market economy, open source perhaps).
Darwinian evolution (survival of the best adapted) requires
a rather lengthy run, and as you say a wide variance of
mutations to explore most of the possibilities presented
by the (ever changing) environment. It seems to me that
TCPA would make the environment appear changeless
from the standpoint of programs. That is one somewhat
limiting factor that malware wouldn't have to deal with.
> >I mentioned this in response to the whole 'which came first the
> >chicken or the egg' question you posed with regard to the bio
> >virus and the host system. It may well be that the host system
> >merely provided the proper environment to make a mutant
> >successful, and the mutation, if it indeed predated the host
> >system, did not at that time meet with any such success.
>
> Where biology is concerned, origins are quite tricky. One supposes a
> tobacco mosaic virus arose after tobacco in the same way that clone
> replacement Epson ink cartridges arose after the targeted Epson
> printer, but there may be some surprises.
True, the tobacco mosaic virus' code may have existed prior
to its being *noticed* as a result of its success with tobacco
host cells. It may have been a common mutation of the code,
but in the wrong environment.
> I read some speculation about mitochondria, which are the cell's
> "batteries", generating required energy. It was claimed that these
> organelles have their own genetic code and in fact might be
> intracellular symbiotes that arose independently - perhaps giving the
> primordial soup the kickstart it needed to develop complexity.
>
> The "hand of God"?
...or the literal translation of 'created in His image'?
(or Hers as the case may be... hmmm mitochondrial DNA)
> It's interesting to imagine a far advanced and evolved world of IT,
> where the human originators are long forgotten. It wouldn't take
> much; extrapolate CAD/CAM, mix in cheap automated space mining etc.,
> self-spawn outwards from there, and there you are (or rather, there
> you aren't, but their "your" bots are)
...the birth of computers, lost in the mists of antiquity.