
Exploiting Linux vulnerabilities?


JPB

Jan 21, 2006, 10:31:10 AM
We know that it's possible for flaws to exist in software running on Linux,
for example:
KDE flaws put Linux, Unix systems at risk
http://news.zdnet.com/2110-1009_22-6029297.html
ClamAV hole sees Linux vendors rush out updates
http://www.techworld.com/security/news/index.cfm?NewsID=5176
so (at least in principle) we know that a remote attacker [might] be able to
get arbitrary code to execute on [some] Linux machines.

We also know that with Linux, that would usually only be the first step to
taking over a machine, as the exploit code would typically only execute
with user privilege, limiting what it could do. The cry then goes up from
Linux detractors and Microsoft apologists that this doesn't matter,
because the malicious code can still do "everything it needs to", such as
use the network, send email, and access or damage user data.

However, I don't quite get this, because when I look at the behaviour of
actual virus/worm/trojan attacks against Windows, that *isn't* all that
successful, automated malicious code needs to do; getting to execute
[once] as a local user is only the start for malicious code attacking
Linux, with further obstacles to overcome in order to propagate
effectively.

If it's a case of exploiting a buffer overflow or similar, then often the
malicious code that executes is just a stub, which retrieves further code.
Unlike the recent Microsoft/WMF flaw, which took advantage of a "feature"
*designed* to run code from a data file, it's likely that unplanned
weaknesses such as overflows run a risk of crashing processes, rather than
executing any arbitrary code at all, and so it makes sense for malicious
code to be crafted as the smallest possible stub, which can then remotely
pull in the rest of the attack.

So far, so good for the malicious code, and the same could be done under
Linux for an initial stub to "bootstrap" itself. Now what does the putative
virus/worm/trojan "want" to do, as it were? There is obviously an immediate
top priority, if it is to propagate:
==> Ensure that it and/or any downloaded "little friends" get to
automatically execute again when the computer restarts. <==
If it can't do that, then it may only ever get one chance to execute. It's
quite likely to crash by itself anyway, as especially with Linux there is
no way it can anticipate in advance what sort of system environment it
might be trying to execute under, and if it causes any noticeably odd
system behaviour the computer may well be restarted deliberately by the
user.

Either way, unless it can hook itself up to auto-start in future, it's not
going to be doing anything for very long, and it's not going to propagate
itself effectively if it gets wiped out in the next reboot. Setting itself
to auto-run might be easy on Windows; to do the same on my Linux box not so
easy, and it will "want" to acquire root privilege to be sure of doing so.

So getting root access *is* important for malicious code to do, and local
user exploits on their own are insufficient; nor is rooting a Linux box
necessarily that easy to do (a subject for another post!). If denied root
access, then propagation of the malicious code is going to be ineffective.

What about the user data? Of course our putative malicious code is able to
access or damage the user's data, which is important for the user in a way
that the OS files are not. However, damaging data is a bad tactic for
malicious code to use; for example, at the extreme end if it deletes the
user's data outright, it is guaranteed that the user is going to notice and
do something about it, whether undertaking their own recovery or seeking
more skilled assistance. Either way, the malicious code has also just
guaranteed that it's going to get wiped out as well, which means it won't
be propagating on anywhere any more.

That won't bring back that user's data; but it does ensure that the impact
will be isolated and the infection won't spread. Just like a real virus,
killing your host, and especially killing your host quickly, also means
that the infection won't spread in the population as a whole. The more
destructive the malicious code is, the less effectively will it propagate;
the less destructive and more stealthy it can be, the more likely it is to
survive and keep propagating for months or even years to come.

Why do I go on and on about propagation? Because that is the essential
element for successful automated malicious code, and the more propagation
is made difficult or denied, the more the population is secured from
attack. *That* is where better security and better immunity from
mass/automated malicious code comes from, for people using Linux.

Not from "security through obscurity", or relative popularity. Not because
there are never any exploits possible against Linux, nor because it's
impossible for a determined "black hat" to specifically target and
compromise a particular Linux box. Security and immunity comes because
automatic propagation is difficult or impossible, so that if a threat which
can take advantage of a possible exploit does arise, it dies out fast,
affecting at most only a few of the potentially vulnerable users.

Again, just like real world infections, when there is no monoculture and
propagation is difficult, there is no need to achieve 100% immunity for
every member of the population - all that is necessary is sufficient
immunity to slow and halt propagation so that any spreading infection
follows an exponential decay curve, rather than an exponential growth
curve.
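
To put toy numbers on that reasoning: if each infected machine passes the
infection on to an average of R further machines before it is cleaned up,
then after n generations there are roughly R^n infections. R = 2 runs
1, 2, 4, 8, 16... and explodes; R = 0.5 runs 1, 0.5, 0.25... and the
outbreak fizzles within a few generations. Herd immunity doesn't have to
stop every single infection - it only has to push R below 1.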

And that *is* a perfectly achievable level of security and immunity to aim
for, and which Microsoft and Windows software lacks.

--
JPB

Linønut

Jan 21, 2006, 11:20:42 AM
After takin' a swig o' grog, JPB belched out this bit o' wisdom:

> And that *is* a perfectly achievable level of security and immunity to aim
> for, and which Microsoft and Windows software lacks.

Interesting analysis. Damn, there is so much out there to learn about
computers and networks!

--
Wean yourself from the Microsoft teat!

billwg

Jan 21, 2006, 12:29:33 PM

"Linųnut" <"=?iso-8859-1?Q?lin=F8nut?="@bone.com> wrote in message
news:XJ-dnQUxRorHwk_e...@comcast.com...

> After takin' a swig o' grog, JPB belched out this bit o' wisdom:
>
>> And that *is* a perfectly achievable level of security and immunity
>> to aim
>> for, and which Microsoft and Windows software lacks.
>
> Interesting analysis. Damn, there is so much out there to learn about
> computers and networks!
>
The whole premise is that it takes a better hacker to compromise a linux
system if it is not being used with root privilege. So the casual
hacker doesn't bother with you, just the serious hackers? That's a real
comfort, nut! LOL!!!


ray

Jan 21, 2006, 12:33:36 PM
I realize that these possibilities exist. MS proponents tell me we only
have 'security by obscurity'. My point - OK, so what? We still do not SEE
malware attacking *nix systems - good enough for me.

JPB

Jan 21, 2006, 1:22:55 PM
ray wrote:

Fortunately we do not rely on "security by obscurity"; the reason we have
effective security and immunity is a bit more subtle than that. There were
a couple of interesting, if controversial, articles recently from Bill
Thompson:
Mac security concerns answered
http://news.bbc.co.uk/1/hi/technology/4620548.stm
Mac users 'too smug' over security
http://news.bbc.co.uk/1/hi/technology/4609968.stm
Unsurprisingly, he got slashdotted and flamed for saying what he did - one
trouble perhaps being that he sometimes over-simplifies to appeal to a
wider audience.

However he did pick up on a crucial point, which is about "herd immunity"
and the difficulty of spreading infection on *nix-like systems, even if he
didn't appreciate how far "herd immunity" exists and is effective, not only
for Apple Mac, but even more so for Linux. Spyware, for example, is
potentially much more difficult to put in place on free/libre open-source
systems such as Linux than on closed/proprietary systems, though that's a
subject for a fresh article.

--
JPB


Peter

Jan 21, 2006, 2:07:57 PM
billwg wrote:


> The whole premise is that it takes a better hacker to compromise a linux
> system if it is not being used with root privilege. So the casual
> hacker doesn't bother with you, just the serious hackers? That's a real
> comfort, nut! LOL!!!

Except that while it is fairly easy for malware writers to take over armies
of Windows boxes by mass production techniques, it is far, far more
difficult to take over armies of Linux machines at root level. This
requires a more complex human interactive approach, especially to get
privilege elevation once a user account has been compromised.

Peter

Jan 21, 2006, 2:13:25 PM
JPB wrote:


> What about the user data? Of course our putative malicious code is able to
> access or damage the user's data, which is important for the user in a way
> that the OS files are not. However, damaging data is a bad tactic for
> malicious code to use; for example, at the extreme end if it deletes the
> user's data outright, it is guaranteed that the user is going to notice
> and do something about it, whether undertaking their own recovery or
> seeking more skilled assistance. Either way, the malicious code has also
> just guaranteed that it's going to get wiped out as well, which means it
> won't be propagating on anywhere any more.
>

From a system administration point of view, 'cleaning up' a user account is
quite easy: just rename the old account, set up a 'clean' /home/user
directory and copy user's application data files (but not configuration
files - ie the 'hidden' directories) to the new directory.
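
In command terms that's roughly the following, run as root ('alice' is just
a stand-in user name):

mv /home/alice /home/alice.old             # keep the old account contents aside
mkdir /home/alice                          # fresh, clean home directory
cp -a /home/alice.old/[!.]* /home/alice/   # data files only; dot-directories stay behind
chown -R alice:alice /home/alice           # hand the new directory back to the user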

By the way, I have created a specific 'user' on my system just for internet
banking so security is maintained even if someone manages to get at my
usual 'user' configurations.
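
(Setting that up is just something like "useradd -m banking && passwd
banking" as root, then switching to that account with "su - banking" - or
a separate login - whenever there is banking to be done.)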

JPB

Jan 21, 2006, 2:54:55 PM
Peter wrote:

> JPB wrote:
>
>
>> What about the user data? Of course our putative malicious code is able
>> to access or damage the user's data, which is important for the user in a
>> way that the OS files are not. However, damaging data is a bad tactic for
>> malicious code to use; for example, at the extreme end if it deletes the
>> user's data outright, it is guaranteed that the user is going to notice
>> and do something about it, whether undertaking their own recovery or
>> seeking more skilled assistance. Either way, the malicious code has also
>> just guaranteed that it's going to get wiped out as well, which means it
>> won't be propagating on anywhere any more.
>>
> From a system administration point of view, 'cleaning up' a user account
> is quite easy, just rename the old account, set up a 'clean' /home/user
> directory and copy user's application data files (but not configuration
> files - ie the 'hidden' directories) to the new directory.
>

Quite a bit easier and more reliable than hoping third-party anti-virus
software can try and repair a damaged registry, and recover other settings,
isn't it? Especially considering malware frequently damages or disables
anti-virus software once it manages to run once. System Restore on Windows,
or re-installation of Windows (and all your third-party apps as well...)
isn't necessarily a walk in the park either.

Considering how easy it is to switch between distributions, I'd expect it
shouldn't even be much more difficult to recover a "rooted" Linux system,
once identified. Just use a live CD to grab the user directory data for
safe-keeping, re-install just the OS, which isn't an onerous task, and
clean up as you describe any user directories which were compromised on the
way to rooting the system. Sorted - and again much easier than trying to
delouse an OS where the system and the user data are not as clearly
separated.
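
In rough terms, from the live CD, something like (device names will vary
per machine, so treat these as illustrative):

mount /dev/hda2 /mnt                     # the installed root filesystem
cp -a /mnt/home /media/usb/home-backup   # user data off to safe keeping

then re-install the OS and copy the home directories back, cleaning any
compromised ones as above.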



> By the way, I have created a specific 'user' on my system just for
> internet banking so security is maintained even if someone manages to get
> at my usual 'user' configurations.

Haven't yet felt the need to do that, but it makes a lot of sense. Perhaps I
should take that view regarding any closed source software I use, such as
Skype, since such things offer a potential route for spyware. Then I could
use only free/libre software on my normal user identity, and run any
closed-source apps that I persuade myself are really necessary in their own
separated-off user environment, where they couldn't touch anything else even
if they wanted to.

Hadn't fully realised quite how useful this user-privilege separation can be
- sounds like a good plan!

--
JPB

Erik Funkenbusch

Jan 21, 2006, 5:07:53 PM
You have a number of flaws in your argument, as well as a lot of "hoping".

On Sat, 21 Jan 2006 15:31:10 +0000, JPB wrote:

> We also know that with Linux, that would usually only be the first step to
> taking over a machine, as the exploit code would typically only execute
> with user privilege, limiting what it could do.

Apart from everything else, you seem to think that a two-stage attack is
improbable. This is pretty far from the case. We've seen lots of
"blended" attacks, like nimda, where it tries dozens of different
multi-stage attacks. Nothing is stopping someone from writing a worm that
gets in via a small user flaw, then downloads potentially hundreds or
thousands of different local root exploits to try and gain advantage of
the root account.

It's not that improbable, and should worm authors start targeting Linux
more often, you can bet it will happen. I've noticed a lot of Linux users
that simply do not take local root vulnerabilities seriously because they
don't allow anyone else to log in to their machines.

> However, I don't quite get this

Naturally, because you don't want to.

> ==> Ensure that it and/or any downloaded "little friends" get to
> automatically execute again when the computer restarts. <==

never heard of .profile? That ensures that any time a user logs in,
certain settings get set, which can include running any program. Now you
COULD make .profile root-owned and write-inaccessible to the user, but
honestly, how often does that happen? Users want to edit their
.profile.

While, indeed, this only allows the program to run when the user is logged
in, many users are logged in all the time.
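
To be concrete, all it would take is one appended line of the order of:

~/.hidden/payload &    # hypothetical dropped binary, started in the background

and the payload runs again at every login.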

> Either way, unless it can hook itself up to auto-start in future, it's not
> going to be doing anything for very long, and it's not going to propagate
> itself effectively if it gets wiped out in the next reboot. Setting itself
> to auto-run might be easy on Windows; to do the same on my Linux box not so
> easy, and it will "want" to acquire root privilege to be sure of doing so.

<sarcasm>Wait, Linux users are always telling me about their 20 year
uptimes. Are you suggesting Linux users might actually shut down their
machines? </sarcasm>

> So getting root access *is* important for malicious code to do, and local
> user exploits on their own are insufficient; nor is rooting a Linux box
> necessarily that easy to do (a subject for another post!). If denied root
> access, then propagation of the malicious code is going to be ineffective.

Rooting a Windows box didn't used to be an easy thing to do either. What
happens is you get large-scale groups of users working on the problem, and
before you know it you've got tons of root exploit code floating around.
Metasploit, for example, serves this purpose (though Metasploit is also
useful for security professionals).

> What about the user data? Of course our putative malicious code is able to
> access or damage the user's data, which is important for the user in a way
> that the OS files are not. However, damaging data is a bad tactic for
> malicious code to use;

In other words, you "hope" that they don't want to damage the data. Even
if they don't damage it, they can certainly look at it, and send that data
back to whomever they want: usernames, passwords, banking information,
whatever. Further, a virus can be written to run for weeks before doing
damage, allowing itself maximum spread time.

There are two basic classes of attacker these days: script kiddies and
professionals. The script kiddies probably get a kick out of deleting
everyone's data. The professionals, however, are looking for zombies that
they can use as spam gateways or for other reasons. You don't have to be
root to be a spam gateway.

JPB

Jan 21, 2006, 7:12:46 PM
Erik Funkenbusch wrote:

> You have a number of flaws in your argument, as well as a lot of "hoping".
>

Thought there'd be a Funkenbusch reply. Oh well, regular readers of this
forum know well enough what *that* means without needing twice telling.

> On Sat, 21 Jan 2006 15:31:10 +0000, JPB wrote:
>
>> We also know that with Linux, that would usually only be the first step
>> to taking over a machine, as the exploit code would typically only
>> execute with user privilege, limiting what it could do.
>
> Apart from everything else, you seem to think that a two-stage attack is
> improbable. This is pretty far from the case. We've seen lots of
> "blended" attacks, like nimda, where it tries dozens of different
> multi-stage attacks. Nothing is stopping someone from writing a worm that
> gets in via a small user flaw, then downloads potentially hundreds or
> thousands of different local root exploits to try and gain advantage of
> the root account.

Remind me, did Nimda affect Linux?

>
> It's not that improbable, and should worm authors start targeting Linux
> more often, you can bet it will happen. I've noticed a lot of Linux users
> that simply do not take local root vulnerabilities seriously because they
> don't allow anyone else to log in to their machines.
>

What it does mean is that a 2-stage attack is *required* to have any hope of
getting anywhere, and not only that but it's necessary to find one that is
effective on a widespread basis. It's not about any *one* target being
invulnerable, but about the difficulty of successful propagation through
the population.



>> However, I don't quite get this
>
> Naturally, because you don't want to.
>

Turnabout is fair play - you don't want to understand why Linux is generally
resistant to exploitation, and Microsoft is generally vulnerable to
exploitation.

>> ==> Ensure that it and/or any downloaded "little friends" get to
>> automatically execute again when the computer restarts. <==
>
> never heard of .profile? That ensures that any time a user logs in,
> certain settings get set, which can include running any program. Now you
> COULD make .profile root-owned and write-inaccessible to the user,
> but honestly, how often does that happen? Users want to edit their
> .profile.
>
> While, indeed, this only allows the program to run when the user is logged
> in, many users are logged in all the time.
>

Preventing arbitrary/non-privileged programs from setting code stored in
individual home directories to run automatically on login is
probably a good idea.

>> Either way, unless it can hook itself up to auto-start in future, it's
>> not going to be doing anything for very long, and it's not going to
>> propagate itself effectively if it gets wiped out in the next reboot.
>> Setting itself to auto-run might be easy on Windows; to do the same on my
>> Linux box not so easy, and it will "want" to acquire root privilege to be
>> sure of doing so.
>
> <sarcasm>Wait, Linux users are always telling me about their 20 year
> uptimes. Are you suggesting Linux users might actually shut down their
> machines? </sarcasm>
>

Any malicious code needs to make sure it can run again in future, if
propagation is to continue. Machines that aren't servers are likely to get
rebooted in the reasonably near future, especially if they're being
attacked by malicious code that may be causing crashes and other issues.

>> So getting root access *is* important for malicious code to do, and local
>> user exploits on their own are insufficient; nor is rooting a Linux box
>> necessarily that easy to do (a subject for another post!). If denied root
>> access, then propagation of the malicious code is going to be
>> ineffective.
>
> Rooting a Windows box didn't used to be an easy thing to do either. What
> happens, is you get large scale groups of users working on the problem,
> and before you know it you've got tons of root exploit code floating
> around. Metasploit, for example, serves this purpose (though Metasploit is
> also useful for security professionals too).
>

Usually it's not necessary to "root" a Windows box - getting arbitrary code
to run as a one-stage process is by and large sufficient. Where are the tons of
root exploit code for Linux floating around again?



>> What about the user data? Of course our putative malicious code is able
>> to access or damage the user's data, which is important for the user in a
>> way that the OS files are not. However, damaging data is a bad tactic for
>> malicious code to use;
>
> In other words, you "hope" that they don't want to damage the data. Even
> if they don't damage it, they can certainly look at it, and send that data
> back to whomever they want: usernames, passwords, banking information,
> whatever. Further, a virus can be written to run for weeks before doing
> damage, allowing itself maximum spread time.
>

It's a trade-off - the more destructive of data it is, the less it's going
to propagate, and the less of a threat it's going to be to the population.

If it delays before doing damage, then the number of infections that get
cleaned first starts to shoot up. When it does do damage, whatever's left
is committing suicide in the process. The upshot is that damage is
inversely proportional to how widespread it can become, and propagation is
inversely proportional to how damaging it is - result, it's not surprising
that serious damage to data is rare.

Finding sensitive information buried somewhere on a hard-drive is a tough
data-mining task to automate, especially with non-monoculture heterogeneous
software like Linux - it's not like being able to set up key-logging that
waits for particular key-sequences, such as a well-known bank site, and
then grabs the password and reports back to the mothership.

What might be possible in practice, without root, is to report back to the
mothership that the attack has managed to get local user access, for a
cracker to investigate and try again at leisure - but extending that to
rooting the machine on an automated basis is difficult, especially against
heterogeneous Linux boxes, and it's liable to need interactive human
cracker analysis to get anywhere.

> There are two basic classes of attacker these days: script kiddies and
> professionals. The script kiddies probably get a kick out of deleting
> everyone's data. The professionals, however, are looking for zombies that
> they can use as spam gateways or for other reasons. You don't have to be
> root to be a spam gateway.

If "script kiddies" delete everyone's data, then their scripts aren't going
to propagate far. As for zombies, it's difficult to zombify a machine and
keep it that way without gaining root.

--
JPB

Aragorn

Jan 21, 2006, 10:57:20 PM
On Saturday 21 January 2006 23:07, Erik Funkenbusch stood up and spoke
the following words to the masses in /comp.os.linux.advocacy...:/

> On Sat, 21 Jan 2006 15:31:10 +0000, JPB wrote:
>
>> ==> Ensure that it and/or any downloaded "little friends" get to
>> automatically execute again when the computer restarts. <==
>
> never heard of .profile? That ensures that any time a user logs in,
> certain settings get set, which can include running any program. Now
> you COULD make .profile root-owned and write-inaccessible to the
> user, but honestly, how often does that happen? Users want to edit
> their .profile.

Two points here...
(1) It is effectively impossible to disallow the user write access to
*~/.profile* except for when an ACL is used that overrides the
permissions of the directory the file is in.

One could make root the owner of the file, but the file resides in the
user's home directory and therefore the standard UNIX permissions give
the user the ability to delete and recreate the file.

It's very easy even to keep the original file's contents. Just open up
a terminal window and issue the following string of commands...:

cp .profile .myprofile && rm -f .profile && mv .myprofile .profile

After the last command in the series - I've used "&&" as the command
separator to ensure that none of the steps are executed unless the
previous one has successfully completed - the file *~/.profile* will
have the same contents as the original but will be owned by the user
and will have the permissions mask as defined by the /umask/ settings.

(2) It is possible to set up the system so that no *~/.profile* exists -
with all of the stuff that normally goes in there having been defined in
*/etc/bash_profile,* unwise as that would be - but nothing prevents a
user from creating a *~/.profile.* It's his own home directory, so he
can do with it as he pleases.

>> Either way, unless it can hook itself up to auto-start in future,
>> it's not going to be doing anything for very long, and it's not going
>> to propagate itself effectively if it gets wiped out in the next
>> reboot. Setting itself to auto-run might be easy on Windows; to do
>> the same on my Linux box not so easy, and it will "want" to acquire
>> root privilege to be sure of doing so.
>
> <sarcasm>Wait, Linux users are always telling me about their 20 year
> uptimes. Are you suggesting Linux users might actually shut down
> their machines? </sarcasm>

<sarcasm payback>

But of course many GNU/Linux users power down their machine or
reboot their system. They were conditioned to consider that to
be normal practice by their prior exposure to Windows.

</sarcasm payback>
<grin>

>> So getting root access *is* important for malicious code to do, and
>> local user exploits on their own are insufficient; nor is rooting a
>> Linux box necessarily that easy to do (a subject for another post!).
>> If denied root access, then propagation of the malicious code is
>> going to be ineffective.
>
> Rooting a Windows box didn't used to be an easy thing to do either.
> What happens, is you get large scale groups of users working on the
> problem, and before you know it you've got tons of root exploit code
> floating around. Metasploit, for example, serves this purpose (though
> Metasploit is also useful for security professionals too).

It has *always* been fairly easy to root a Windows box in comparison to
a GNU/Linux machine, even if only because of the lax default security
set-up.

>> What about the user data? Of course our putative malicious code is
>> able to access or damage the user's data, which is important for the
>> user in a way that the OS files are not. However, damaging data is a
>> bad tactic for malicious code to use;
>
> In other words, you "hope" that they don't want to damage the data.
> Even if they don't damage it, they can certainly look at it, and send
> that data back to whomever they want: usernames, passwords, banking
> information, whatever. Further, a virus can be written to run for
> weeks before doing damage, allowing itself maximum spread time.

I'm afraid you are overlooking the fact that malicious code does not run
itself in UNIX the way that is possible in Windows via the execution of
e-mail attachments - granted, in Microsoft's own e-mail clients - and
that the diversity of client applications such as browsers or office
suites - even if OpenOffice seems the prevalent suite on GNU/Linux and
Solaris - makes any attempt to spread such viruses statistically less
effective.

It also poses another problem for the virus makers, i.e. which
application to target? On Windows, that's fairly easy: aim at IE and
OE, and you'll have the greatest effect.

The weakness of monoculture...

> There are two basic classes of attacker these days: script kiddies
> and professionals. The script kiddies probably get a kick out of
> deleting everyone's data.

My experience with /scr1pt/ /k1dd13s/ is that conducting DDoS attacks
using zombies is what they get off on the most. Harassment is their
game.

They also typically target Windows more because that is what they use
themselves and what they know. They're typically far less bright than
they pretend to be, as you could notice yourself from the flurries
of forged quotes launched through anonymizing gateways by disposable
and unique poster names on this group.

> The professionals, however, are looking for zombies that they can use
> as spam gateways or for other reasons. You don't have to be root to
> be a spam gateway.

A typical UNIX user possesses enough knowledge of computer systems to
recognize any unauthorized processes running on his machine, and in
order for spam e-mail to not leave any traces after having been sent,
it would have to run from a different e-mail application than the one
the user is normally using.

--
With kind regards,

*Aragorn*
(Registered GNU/Linux user #223157)

GreyCloud

Jan 21, 2006, 11:05:03 PM

What it boils down to is that Ewik doesn't know how permission bits are
set for processes by the o/s.


--
Where are we going?
And why am I in this handbasket?

Aragorn

Jan 22, 2006, 12:05:04 AM
On Sunday 22 January 2006 05:05, GreyCloud stood up and spoke the
following words to the masses in /comp.os.linux.advocacy...:/

And we could even add - although I'm sure that Erik knows this as well
by now - that the Linux kernel can make use of a software-implemented
/NX/ bit on IA32 when the kernel is compiled for 64 GB memory support.

For those who don't follow the logic...: If a 32-bit Linux kernel has
support for up to 64 GB of physical memory - this is possible only on
i686 and above - the CPU must be running in PAE mode, i.e. "Physical
Address Extension". In PAE mode, the CPU uses 36 bits for memory
addressing, which are of course used in a paged mode - typically with 3
GB of virtual address space for each process - as the kernel can only
use 32 bits at once.

When the CPU is running in PAE mode, the Linux kernel can implement an
/NX/ bit _via_ _software_ on IA32. IA32 is the only modern
microprocessor that still doesn't have an /NX/ implementation in
hardware. IA32-64 and IA64 do have that feature in hardware, but it
can of course only be used when they're running in 64-bit mode.

For those who have a genuine IA64 - i.e. the Itanium or Itanium-2 - or
who have an IA32-64 CPU - such as AMD64 or EM64T - and who wish to run
a 32-bit version of GNU/Linux, PAE is of course also available there.

The default choice in the Linux kernel is to use /NX/ whenever possible,
i.e. via hardware when in 64-bit mode on a 64-bit CPU, via software
when the CPU is in 32-bit PAE mode. It would take a boot parameter to
disable it, and I can't imagine anyone _wanting_ to disable it.

As far as I know, only special corporate versions of Windows - 2003
Server and above, probably - have support for up to 64 GB of physical
memory and thus for PAE. The natively 64-bit versions of Windows do of
course also feature support for a hardware /NX/ implementation.

I do however not know whether Windows implements a software /NX/ bit on
its 32-bit 64 GB-supporting versions. What I do know is that it does
use the hardware /NX/ implementation on machines that have it, but that
excludes all IA32 machines and they are still in the majority.

One *could* of course counter this - and I'm quite sure that Erik would
have, had I not written the following myself - that not all Linux kernels
are compiled for PAE support. In fact, most distributions still
compile their kernels for a one-size-fits-all i586 CPU, and with
support for only 1 GB of memory.

On the other hand, many distributions supply extra kernels along with
their installation, and some of those kernels may have 64 GB support.
Mandriva is such a distribution, and I believe Fedora Core and RedHat
also provide for such "heavier" kernels.

The free nature of GNU/Linux does of course give every user -
intellectually capable or not - the opportunity to "roll his own",
either by making a few simple changes to the *.config* of the stock
distro kernel or by fetching, configuring and compiling a vanilla
kernel.
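
(On a 2.6-era x86 tree that amounts to flipping an option of the order of
CONFIG_HIGHMEM64G=y, which selects PAE and with it the soft-/NX/ support
discussed above - option name from memory, so check your own tree.)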

GNU/Linux users who use Gentoo, LFS, Slackware and many of the users of
other distributions are already familiar with using custom-configured
and custom-compiled kernels. This also narrows down the range of
GNU/Linux users who would be affected by a genuine virus that were
explicitly intended to run on any modern GNU/Linux system and that were
to work by explicitly executing code from within a data-holding memory
page.

This target range is narrowed down even further by the head start that
the rapid evolution of the kernel code gives the kernel developers over
the virus developers - not even to mention userspace application
development.
UNIX/POSIX and thus in GNU/Linux, as opposed to in Microsoft products.
This is one more reason why Windows systems will always be more
likely to be targeted by the writers of malware.

Whether it is on Windows or on GNU/Linux, or on whatever other system,
/NX/ does work and should not be dismissed. It's an important asset in
the battle for more security from malware, and that's why it's being
used and endorsed.

Erik Funkenbusch

Jan 22, 2006, 4:11:39 AM
On Sun, 22 Jan 2006 03:57:20 GMT, Aragorn wrote:

>> never heard of .profile? That ensures that any time a user logs in,
>> certain settings get set, which can include running any program. Now
>> you COULD make .profile a root owned account write inaccessible to the
>> user, but honestly, how often does that happen? Users want to edit
>> their .profile.
>
> Two points here...

Well, you could use something like chattr to make the file immutable, but
your point stands. There are several ways to make programs start when the
user logs in. Bailo's arguments here are spurious.
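
(Something like "chattr +i /home/user/.profile" as root, on ext2/ext3;
after that not even the file's owner can modify, delete or rename it
until root clears the flag again with "chattr -i".)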

>> Rooting a Windows box didn't used to be an easy thing to do either.
>> What happens, is you get large scale groups of users working on the
>> problem, and before you know it you've got tons of root exploit code
>> floating around. Metasploit, for example, serves this purpose (though
>> Metasploit is also useful for security professionals too).
>
> It has *always* been fairly easy to root a Windows box in comparison to
> a GNU/Linux machine, even if only because of the lax default security
> set-up.

I use the term "root" here to refer to taking over a machine remotely. No,
it hasn't "always" been easy. Yes, trojans and the like have always been
pretty easy, but exploiting buffer overflows used to be a difficult thing
to do; it is now ridiculously simple because there are now toolkits to
generate buffer overflow code.

>> In other words, you "hope" that they don't want to damage the data.
>> Even if they don't damage it, they can certainly look at it, and send
>> that data back to whomever they want: usernames, passwords, banking
>> information, whatever. Further, a virus can be written to run for
>> weeks before doing damage, allowing itself maximum spread time.
>
> I'm afraid you are overlooking the fact that malicious code does not run
> itself in UNIX the way that is possible in Windows via the execution of
> e-mail attachments - granted, in Microsoft's own e-mail clients - and
> that the diversity of client applications such as browsers or office
> suites - even if OpenOffice seems the prevalent suite on GNU/Linux and
> Solaris - makes any attempt to spread such viruses statistically less
> effective.

You're also overlooking the fact that we're talking about a specific
vulnerability that allows arbitrary remote code execution: the flaw in
Konqueror's javascript parser.

> It also poses another problem for the virus makers, i.e. which
> application to target? On Windows, that's fairly easy: aim at IE and
> OE, and you'll have the greatest effect.

You're under the impression that an attacker can only attack one app.
That's totally not true. Blended attacks are common these days. They can
combine attacks for hundreds of programs into one virus.

> The weakness of monoculture...

Not being one doesn't make you immune.

>> There are two basic classes of attacker these days: script kiddies
>> and professionals. The script kiddies probably get a kick out of
>> deleting everyone's data.
>
> My experience with /scr1pt/ /k1dd13s/ is that conducting DDoS attacks
> using zombies is what they get off on the most. Harassment is their
> game.

Indeed.

> They also typically target Windows more because that is what they use
> themselves and what they know. They're typically far less bright than
> what they pretend to be, as you could notice yourself from the flurries
> of forged quotes launched through anonymizing gateways by disposable
> and unique poster names on this group.

No, they target Windows because it gives them the most bang for the buck.

>> The professionals, however, are looking for zombies that they can use
>> as spam gateways or for other reasons. You don't have to be root to
>> be a spam gateway.
>
> A typical UNIX user possesses enough knowledge of computer systems to
> recognize any unauthorized processes running on his machine, and in
> order for spam e-mail to not leave any traces after having been sent,
> it would have to run from a different e-mail application than the one
> the user is normally using.

That argument only holds so long as "normal" users don't start using Linux.

Erik Funkenbusch

Jan 22, 2006, 4:15:48 AM
On Sun, 22 Jan 2006 05:05:04 GMT, Aragorn wrote:

> And we could even add - although I'm sure that Erik knows this as well
> by now - that the Linux kernel can make use of a software-implemented
> /NX/ bit on IA32 when the kernel is compiled for 64 GB memory support.

NX doesn't need to be used in PAE mode to work.

> As far as I know, only special corporate versions of Windows - 2003
> Server and above, probably - have support for up to 64 GB of physical
> memory and thus for PAE. The natively 64-bit versions of Windows do of
> course also feature support for a hardware /NX/ implementation.

XP SP2 and 2003 SP1 added support for NX as well.

Erik Funkenbusch

Jan 22, 2006, 4:34:53 AM
On Sun, 22 Jan 2006 00:12:46 +0000, JPB wrote:

>> Apart from everything else, you seem to think that a two-stage attack is
>> improbable. This is pretty far from the case. We've seen lots of
>> "blended" attacks, like nimda, where it tries dozens of different
>> multi-stage attacks. Nothing is stopping someone from writing a worm that
>> gets in via a small user flaw, then downloads potentially hundreds or
>> thousands of different local root exploits to try and gain advantage of
>> the root account.
>
> Remind me, did Nimda affect Linux?

You completely missed the point. The point was that blended attacks are
common now. Just because an attack would require two components to gain
root doesn't make it unlikely. This is child's play these days. Literally.

Nimda used dozens of attack vectors, and something similar could be written
for Linux to attack multiple vectors.

> What it does mean is that a 2-stage attack is *required* to have any hope of
> getting anywhere, and not only that but it's necessary to find one that is
> effective on a widespread basis. It's not about any *one* target being
> invulnerable, but about the difficulty of successful propagation through
> the population.

Not true at all. Root is not necessary. As I said, you don't need root to
get the program to restart after a reboot, because the program can write
itself to the .Profile, and Linux users seem to be very proud of their
uptimes anyway, so the machine could be running for weeks or months.

>>> However, I don't quite get this
>>
>> Naturally, because you don't want to.
>
> Turnabout is fair play - you don't want to understand why Linux is generally
> resistant to exploitation, and Microsoft is generally vulnerable to
> exploitation.

No, Linux is just resistant to many of the common attacks used against
Windows. That's not the same thing, but you think it is.

>> never heard of .profile? That ensures that any time a user logs in,
>> certain settings get set, which can include running any program. Now you
>> COULD make .profile root-owned and write-inaccessible to the user,
>> but honestly, how often does that happen? Users want to edit their
>> .profile.
>>
>> While, indeed, this only allows the program to run when the user is logged
>> in, many users are logged in all the time.
>
> Preventing arbitrary/non-privileged programs from setting code stored in
> individual home directories to run automatically on login is
> probably a good idea.

And how would you do that? The whole purpose of the .Profile is to execute
startup programs when a user logs in. You could make the /home partition
non-executable, but that wouldn't stop programs stored in, say, /tmp, from
running. And setting /tmp to be non-executable might break some programs
that expect to be able to store temporary executable code there.
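
The non-executable mount would be an /etc/fstab entry of the form (device
and filesystem type being illustrative):

/dev/hda3  /home  ext3  defaults,noexec,nosuid,nodev  0  2

and note that noexec only raises the bar - running "sh script" hands the
file to an interpreter and sidesteps it.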

Still, even if you did manage to make your system completely incapable of
storing any executable code as a normal user, that wouldn't be the common
configuration for most users out there.

>> <sarcasm>Wait, Linux users are always telling me about their 20 year
>> uptimes. Are you suggesting Linux users might actually shut down their
>> machines? </sarcasm>
>
> Any malicious code needs to make sure it can run again in future, if
> propagation is to continue. Machines that aren't servers are likely to get
> rebooted in the reasonably near future, especially if they're being
> attacked by malicious code that may be causing crashes and other issues.

One more time. .Profile. Do I have to say it again?

> Finding sensitive information buried somewhere on a hard-drive is a tough
> data-mining task to automate, especially with non-monoculture heterogeneous
> software like Linux - it's not like being able to set up key-logging that
> waits for particular key-sequences, such as a well-known bank site, and
> then grabs the password and reports back to the mothership.

No, it's not tough. Application file formats are pretty common knowledge.
You can look for files of a certain type, and use regular expressions to
extract data, or simply email those files somewhere else. Again, you're
"hoping" that it's simply too difficult to bother, but that's not a
defense.

JPB

Jan 22, 2006, 7:04:05 AM
GreyCloud wrote:

> Aragorn wrote:
<snip>


>>
>> Two points here...
>> (1) It is effectively impossible to disallow the user write access to
>> *~/.profile* except for when an ACL is used that overrides the
>> permissions of the directory the file is in.
>>
>> One could make root the owner of the file, but the file resides in the
>> user's home directory and therefore the standard UNIX permissions give
>> the user the ability to delete and recreate the file.
>>
>> It's very easy even to keep the original file's contents. Just open up
>> a terminal window and issue the following string of commands...:
>>
>> cp .profile .myprofile && rm -f .profile && mv .myprofile
>> .profile
>>
>> After the last command in the series - I've used "&&" as the command
>> separator to ensure that none of the steps are executed unless the
>> previous one has successfully completed - the file *~/.profile* will
>> have the same contents as the original but will be owned by the user
>> and will have the permissions mask as defined by the /umask/ settings.
>>

OK, got that.



>> (2) It is possible to set up the system so that no *~/.profile*
>> exists - with all of the stuff that normally goes in there having been
>> defined in */etc/bash_profile,* unwise as that would be - but nothing
>> prevents a user from creating a *~/.profile.* It's his own home
>> directory, so he can do with it as he pleases.
>>


Without a ~/.profile sounds nearer to what I was after. Two situations come
to mind:
1) Larger scale multi-user, where the user does *not* own the computer, but
the administrators may well take the view that what the user does in the
home directory is up to them. This isn't really the case I'm referring to,
as in that case you'll have administrators managing the thing, depending on
what security policy they implement.
2) Smaller scale - not necessarily one user/one machine, but likely with few
users, no administrator as such, and quite likely the user(s) do in fact own
the whole computer, so can do with it as they please, not just the home
directory but all of it. There's lots and lots of these, and more of them
are running Linux as time passes.

So what I'm after is not so much preventing a user from being able to update
their ~/.profile, but preventing a program from altering it without
explicitly having to get user interaction/authorisation - a bit like having
to sudo to root, except to your *own* id.

Even more general than that - it's not updating .profile that's a problem in
itself, or having it call programs already installed/available on the
system. It's the possibility that code might put executables somewhere like
~/, and set them to auto-run on login, without interacting with the user for
permission. Making /home non-executable might achieve the desired effect,
but seems a bit too restrictive - it would be enough if it's possible to
ensure that files written cannot be made executable without explicit
interaction with the user, maybe along similar lines to su/sudo privilege
escalation.

Maybe it's not too much of an issue for us at present - it's much less
serious than malicious code being able to dig an auto-restart program into
the system so that it executes regardless of who logs in (maybe before
login), as is usually trivial to achieve on a Windows system.

However, perhaps we should be asking ourselves some of these questions, and
acting on them if need be as Linux becomes more popular. Thinking about
what malicious code needs to do for automated propagation, if it can be
made non-trivial for attacking code to recursively set itself to execute on
restart, then that interlock is one way to break the
reproduction/propagation cycle - and every time we do that it makes
potential attacks liable to fizzle rather than explode exponentially.

--
JPB

GreyCloud

Jan 22, 2006, 12:39:18 PM

We know that, but you don't seem to understand how Linux and UNIX o/s
assign permissions to files and processes.

Go to page 92 of "Advanced Programming in the UNIX Environment", 2nd
edition. You'll see how malicious programs like nimda would fail.

Erik Funkenbusch

Jan 22, 2006, 3:02:08 PM
On Sun, 22 Jan 2006 12:04:05 +0000, JPB wrote:

> However, perhaps we should be asking ourselves some of these questions, and
> acting on them if need be as Linux becomes more popular. Thinking about
> what malicious code needs to do for automated propagation, if it can be
> made non-trivial for attacking code to recursively set itself to execute on
> restart, then that interlock is one way to break the
> reproduction/propagation cycle - and every time we do that it makes
> potential attacks liable to fizzle rather than explode exponentially.

No. The problem is not that Linux has attack vectors. It will always have
them - if for no other reason than human nature - just like any other OS.

The problem is that too many Linux users believe they are immune to
everything, and assume that just because they may not be susceptible to
some forms of attack, they are immune to other forms as well. Attackers
are intelligent and creative. They will find and exploit any hole they
can, no matter how difficult, if they decide to target you.

Just because an attack might be difficult today doesn't mean it will be
tomorrow.

Tim Smith

Jan 22, 2006, 4:03:59 PM
In article <AoDAf.171340$4D1.6...@phobos.telenet-ops.be>,

Aragorn <str...@telenet.invalid> wrote:
> (1) It is effectively impossible to disallow the user write access to
> *~/.profile* except for when an ACL is used that overrides the
> permissions of the directory the file is in.
>
> One could make root the owner of the file, but the file resides in the
> user's home directory and therefore the standard UNIX permissions give
> the user the ability to delete and recreate the file.

Unix is more flexible than that, if you think a little bit out of the
box. Given a user foo, do the following:

1. Create a group foo, with user foo as the only member.

2. Set up this directory:

/home/foo/foo

with the following ownerships and permissions:

/home/foo root:foo 550
/home/foo/foo root:root 1777

3. Set /home/foo/foo as foo's home directory.

4. Make /home/foo/foo/.profile owned by root.

Essentially, the user's home directory works like /tmp this way, so only
the owner of a file can delete it. That requires making the home
directory world writable, so to protect it, it has to be put in a
directory that only the user has x permission on, so no one else can
reach it.
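
In concrete terms, the above is roughly (as root, with 'foo' as the user
name, and the .profile seed path being a per-distro guess):

groupadd foo                             # 1. group with foo as only member
usermod -g foo foo
mkdir -p /home/foo/foo                   # 2. the nested directory
chown root:foo /home/foo
chmod 550 /home/foo
chown root:root /home/foo/foo
chmod 1777 /home/foo/foo                 # sticky bit, world-writable, like /tmp
usermod -d /home/foo/foo foo             # 3. foo's new home directory
cp /etc/skel/.profile /home/foo/foo/     # 4. seed and lock down .profile
chown root:root /home/foo/foo/.profile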

ACLs, of course, are a lot less hassle (if you can figure them out from
the poor documentation...).

--
--Tim Smith

Tim Smith

Jan 22, 2006, 4:33:27 PM
In article <dquinu$mh6$1$8300...@news.demon.co.uk>,

JPB <news{@}europa{.}demon{.}co{.}uk> wrote:
> What it does mean is that a 2-stage attack is *required* to have any hope of
> getting anywhere, and not only that but it's necessary to find one that is
> effective on a widespread basis. It's not about any *one* target being
> invulnerable, but about the difficulty of successful propagation through
> the population.

You'd be correct if this were the late 80's. Nowadays, the goal of most
worms is to use the target machine as part of a botnet. No rooting is
required to do this. Just getting code on the machine running as a user
and having a way to survive a reboot is good enough.

...


> Preventing arbitrary/non-privileged programs from setting code stored in
> individual home directories to run automatically on login is
> probably a good idea.

cron.
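
(e.g. "crontab -e" as the user, plus an entry of the flavour of
"*/10 * * * * ~/.hidden/payload" - the path is hypothetical, but no root
is needed and user crontabs survive reboots.)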

Other ways to get code to run again later that come to mind:

1. Modify the X startup files.

2. Modify .vimrc to add a handler for some common file type, such
as .txt, that runs the malware when the user edits a file of that
type.

3. I believe something similar to #2 could be done for emacs.

I'm sure you can find a bunch more in this vein using other programs.
There are a ton of ways to leave little booby traps to run code as the
user goes about their business.

These won't run right at login time, but that's almost never necessary.
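
For instance, #2 is a single line added to ~/.vimrc, of the flavour of:

autocmd BufReadPost *.txt silent !~/.hidden/payload &

(payload path hypothetical) - after which every .txt file the user opens
quietly starts the malware again.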

...


> Any malicious code needs to make sure it can run again in future, if
> propagation is to continue.

It depends on how it propagates. Code that propagates via scanning a
large number of addresses looking for vulnerable machines needs to keep
running. Code that propagates by, say, sending an email that exploits a
problem in an email client, only needs to run once, long enough to send
an email to everyone it can find listed in your address book.

> Machines that aren't servers are likely to get rebooted in the
> reasonably near future, especially if they're being attacked by
> malicious code that may be causing crashes and other issues.

Don't count on this. My Macs, for example, are not servers, but they
only get rebooted when there is a software update that requires a
reboot. When I'm not using them, I don't turn them off. I use sleep
instead. This is quite common among Mac users, and I think it will
become the norm among PC users, too, as PC hardware gets better at
sleeping, and PC operating systems get better at dealing with it.

--
--Tim Smith

Tim Smith

Jan 22, 2006, 4:56:09 PM
In article <dqu3jo$1b7$1$830f...@news.demon.co.uk>,

JPB <news{@}europa{.}demon{.}co{.}uk> wrote:
> Peter wrote:
> > By the way, I have created a specific 'user' on my system just for
> > internet banking so security is maintained even if someone manages to get
> > at my usual 'user' configurations.
>
> Haven't yet felt the need to do that, but it makes a lot of sense. Perhaps I
> should take that view regarding any closed source software I use, such as
> Skype, since such things offer a potential route for spyware. Then I could
> use only free/libre software on my normal user identity, and run any
> closed-source apps that I persuade myself are really necessary in its own
> separated-off user environment, where it couldn't touch anything else even
> if it wanted to.

That's not as good as Peter's approach. The way you mention basically
helps protect against things put in by the software vendor (e.g.,
spyware). It doesn't protect against bugs in open source software, and
it is bugs in open source software that have been the biggest source of
Linux holes.

Hence, separating things by function, as Peter is doing with banking, is
much better than separating them by origin/type of code.


--
--Tim Smith

Robert Newson

Jan 22, 2006, 5:04:03 PM
ray wrote:

In my logs I SEE malware attacking my Linux box all the time...it's just
that the malware is trying to get into a Windows box...

Jim Richardson

Jan 22, 2006, 4:30:06 PM
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1


Pretty much spot on.

Linux is far less vulnerable than MS-Windows, but that doesn't make it
*in*vulnerable, and let's face it, crowing that you're safer than
MS-Windows isn't exactly aspiring very high...

Oh look, I can lift more weights than an anemic nine-year-old boy, go
me.

We need to push Linux to improve in all areas, even security and
stability. MS sure isn't going to provide the impetus for Linux to improve
in those areas.


-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.1 (GNU/Linux)

iD8DBQFD0/led90bcYOAWPYRAn31AJ45ekbbv0qPQkpn9pny0h3pGj7vGgCgqjRq
7QBgYXsbrYbUZMk+YLSv8bs=
=IZmD
-----END PGP SIGNATURE-----

--
Jim Richardson http://www.eskimo.com/~warlock
When the DM smiles, it's already too late.

Robert Newson

Jan 22, 2006, 5:12:50 PM
Erik Funkenbusch wrote:

...


>>><sarcasm>Wait, Linux users are always telling me about their 20 year
>>>uptimes. Are you suggesting Linux users might actually shut down their
>>>machines? </sarcasm>
>>>
>>Any malicious code needs to make sure it can run again in future, if
>>propagation is to continue. Machines that aren't servers are likely to get
>>rebooted in the reasonably near future, especially if they're being
>>attacked by malicious code that may be causing crashes and other issues.
>
> One more time. .Profile. Do I have to say it again?

Errrm, nothing in my .profile ever gets run whenever I log in. You can say
it as many times as you like, but it won't change the fact that nothing in
my .profile is executed any time I log in.

Perhaps you can explain what .profile has got to do with /bin/csh?

Robert Newson

Jan 22, 2006, 5:19:53 PM
JPB wrote:

...


> Without a ~/.profile sounds nearer to what I was after. Two situations come
> to mind:
> 1) Larger scale multi-user, where the user does *not* own the computer, but
> the administrators may well take the view that what the user does in the
> home directory is up to them. This isn't really the case I'm referring to,
> as in that case you'll have administrators managing the thing, depending
> what security policy they implement.
> 2) Smaller scale - not necessarily one user/one machine, but likely with few
> users, no administrator as such, and quite likely the user(s) do in fact own
> the whole computer, so can do with it as they please, not just the home
> directory but all of it. There's lots and lots of these, and more of them
> are running Linux as time passes.

You missed situation Number 3:

3) the user doesn't log in with/use [ba]sh, but some other shell, e.g. csh,
which /does NOT use/ the .profile file at all.

JPB

Jan 22, 2006, 5:25:39 PM
Tim Smith wrote:

I'll probably start doing either or both, as necessary, to be honest. I
don't currently use Internet banking, and don't immediately see another
sensitive function that I would want to separate; if I did, I expect I
would do so.

--
JPB

Linønut

unread,
Jan 22, 2006, 9:25:43 PM1/22/06
to
After takin' a swig o' grog, Jim Richardson belched out this bit o' wisdom:

> We need to push Linux to improve in all areas, even security, and
> stability. MS sure isn't going provide the impetus for LInux to improve
> in those areas.

The problem with software engineering today is that it isn't. Well, it
is sometimes, but the costs seem too high to the coders and to the people
who direct them.

Our group recently achieved CMM level 3 (so we're halfway to the max
level, 5). This means we have pretty good control over what we do, with
a lot of auditing, inspecting, unit-testing, metrics, and
cross-checking. Yet we still don't put much effort into writing secure
code, or even air-tight code. Fortunately, at this time our code runs
only on an isolated network.

--
Wean yourself from the Microsoft teat!

Aragorn

unread,
Jan 23, 2006, 1:44:11 AM1/23/06
to
On Sunday 22 January 2006 10:11, Erik Funkenbusch stood up and spoke the
following words to the masses in /comp.os.linux.advocacy...:/

> On Sun, 22 Jan 2006 03:57:20 GMT, Aragorn wrote:
>>> never heard of .profile? That ensures that any time a user logs in,
>>> certain settings get set, which can include running any program.
>>> Now you COULD make .profile a root-owned file, write-inaccessible to
>>> the user, but honestly, how often does that happen? Users want to edit
>>> their .profile.
>>
>> Two points here...
>
> Well, you could use something like chattr to make the file immutable,
> but your point stands. There are several ways to make programs start
> when the user logs in. Bailo's arguments here are spurious.

/JPB/ is Bailo? Woops, I must have missed that... ;-)

>>> Rooting a Windows box didn't used to be an easy thing to do either.
>>> What happens, is you get large scale groups of users working on the
>>> problem, and before you know it you've got tons of root exploit
>>> code floating around. Metasploit, for example, serves this purpose
>>> (though Metasploit is also useful for security professionals too).
>>
>> It has *always* been fairly easy to root a Windows box in comparison
>> to a GNU/Linux machine, even if only because of the lax default
>> security set-up.
>
> I use the term "root" here to refer to taking over a machine remotely.

Oh, I see. Well, for me it doesn't necessarily mean "remotely", but
I'll take your wording into account here. ;-)

> No, it hasn't "always" been easy. Yes, trojans and the like have
> always been pretty easy, but exploiting buffer overflows used to be a
> difficult thing to do, but is now ridiculously simple because there
> are now toolkits to generate buffer overflow code.

I know, and it is frightening how many of those pre-made tools are
freely available to the public... :-/

>>> In other words, you "hope" that they don't want to damage the data.
>>> Even if they don't damage it, they can certainly look at it, and
>>> send that data back to whomever they want. usernames, passwords,
>>> banking information, whatever.. Further, a virus can be written to
>>> run for weeks before doing damage, allowing itself maximum spread
>>> time.
>>
>> I'm afraid you are overlooking the fact that malicious code does not
>> run itself in UNIX the way that is possible in Windows via the
>> execution of e-mail attachments - granted, in Microsoft's own e-mail
>> clients - and that the diversity of client applications such as
>> browsers or office suites - even if OpenOffice seems the prevalent
>> suite on GNU/Linux and Solaris - makes any attempt to spread such
>> viruses statistically less effective.
>
> You're also overlooking the fact that we're talking about a specific
> vulnerability that allows arbitrary remote execution of data. The
> flaw in Konqueror's javascript parser.

I was indeed speaking more in generic terms, yes...

>> It also already imposes another problem to the virus makers, i.e.
>> which application to target? On Windows, that's fairly easy: aim at
>> IE and OE, and you'll have the greatest effect.
>
> You're under the impression that an attacker can only attack one app.

No, not at all. I apologize if that is what you interpreted from what I
wrote.

> That's totally not true. Blended attacks are common these days. They
> can combine attacks for hundreds of programs into one virus.

Yes, but _will_ they? That is the question. To do so would require a
meticulousness and thoroughness not typically found in most humans.

>> The weakness of monoculture...
>
> not being one doesn't make you immune.

No, not immune. It does however even out the odds more.

>>> There are two basic classes of attacker these days. script kiddies,
>>> and professionals. The script kiddies probably get a kick out of
>>> deleting everyones data.
>>
>> My experience with /scr1pt/ /k1dd13s/ is that conducting DDoS attacks
>> using zombies are what they get off on the most. Harassment is their
>> game.
>
> Indeed.
>
>> They also typically target Windows more because that is what they use
>> themselves and what they know. They're typically far less bright
>> than what they pretend to be, as you could notice yourself from the
>> flurries of forged quotes launched through anonymizing gateways by
>> disposable and unique poster names on this group.
>
> No, they target Windows because it gives them the most bang for the
> buck.

It is always possible, and even very likely, that if and when GNU/Linux
becomes more prevalent, it will be targeted by these villains to a much
higher degree. At present, however - while there is truth in what you
say - I do believe that they target Windows more because it is what
they themselves use and know.

Of course, they are aware that Windows is the most prevalent platform,
and that most of the cracking/harassment tools available - they are not
bright enough to write such tools themselves - are indeed aimed at
Windows. (There are also plenty of such tools aimed at attacking IRC
networks, by the way.)

So yes, they know they'll be most successful when aiming for Windows.
But then again, pseudo-statistically we could also say that the
corporate and individualistic origin of the Windows environment breeds
such villains, whereas people in the GNU/Linux community really do have
somewhat of a community spirit and a mutual respect.

All of the above is of course related to script kiddies and in no way
represents "professional" crackers. They have other priorities and
other motives, although they too are aware of the better security of
GNU/Linux - and UNIX in general - as well as the market share of
Windows.

>>> The professionals, however, are looking for zombies that they can
>>> use as spam gateways or for other reasons. You don't have to be
>>> root to be a spam gateway.
>>
>> A typical UNIX user possesses enough knowledge of computer systems to
>> recognize any unauthorized processes running on his machine, and in
>> order for spam e-mail to not leave any traces after having been sent,
>> it would have to run from a different e-mail application than the one
>> the user is normally using.
>
> That argument only holds so long as "normal" users don't start using
> Linux.

While this is true, I myself am not awaiting the big breakthrough of
having GNU/Linux on every home or office desktop.

Not that I want to feed the "GNU/Linux is for geeks" prejudice any more
than it already has been, but the nature of the operating system itself
does mean that those who install it of their own free will and are not
yet IT-savvy enough will either start becoming more savvy or will
abandon GNU/Linux and return to whatever they were using before that -
which often leads back to Windows or the Macintosh, of course.

I personally do see the GNU/Linux market share grow, but I don't expect
any extreme take-overs of the market to be happening anytime soon - or
late, for that matter. The very simple reason being that GNU/Linux does
indeed require some knowledge, some determination and some commitment -
not to mention discipline - from the user.

GNU/Linux is like giving a person a few free pilot lessons. There are
those who go along to satisfy their curiosity, there are those who are
interested in learning how to fly, and there are those who prefer
sitting on their couch, away and safe from all those airplanes.

Those who accept the free flying lessons may bail out after the first
lessons have been given, because learning more would require some
financial commitment - in GNU/Linux terms, this means commitment to
learning to administer their system; the financial aspect was just a
metaphor - and there are those who are willing to learn how to fly
either way.

The latter are the ones who will be savvy enough to run their GNU/Linux
operating systems and administer them in a responsible way. The former
are the ones most likely to wipe out GNU/Linux again. Those who stay
at home and stay clear of the airplanes are the ones who will never
make the transition.

I advocate GNU/Linux because I believe in the superior quality of this
operating system. I advocate Free (and Open Source) Software because I
believe in the nobility and fairness of this software development
model, and because it's a little realistic and tangible touch of the
Utopian dream I have for society.

I don't advocate either of them out of a belief that they can or will
overtake any markets. I consider any gain in marketshare a bonus. ;-)

Erik Funkenbusch

unread,
Jan 23, 2006, 2:27:56 AM1/23/06
to
On Mon, 23 Jan 2006 06:44:11 GMT, Aragorn wrote:

>> Well, you could use something like chattr to make the file immutable,
>> but your point stands. There are several ways to make programs start
>> when the user logs in. Bailo's arguments here are spurious.
>
> /JPB/ is Bailo? Woops, I must have missed that... ;-)

Well, I didn't check the headers. And, since Bailo can be a nymshifter,
and JPB appeared out of nowhere, it was a logical conclusion, given that
he has at least two of Bailo's initials in his name.

>> No, it hasn't "always" been easy. Yes, trojans and the like have
>> always been pretty easy, but exploiting buffer overflows used to be a
>> difficult thing to do, but is now ridiculously simple because there
>> are now toolkits to generate buffer overflow code.
>
> I know, and it is frightening how many of those pre-made tools are
> freely available to the public... :-/

I think it would be even more frightening if those tools were only
available to the black hats.

>>> It also already imposes another problem to the virus makers, i.e.
>>> which application to target? On Windows, that's fairly easy: aim at
>>> IE and OE, and you'll have the greatest effect.
>>
>> You're under the impression that an attacker can only attack one app.
>
> No, not at all. I apologize if that is what you interpreted from what I
> wrote.
>
>> That's totally not true. Blended attacks are common these days. They
>> can combine attacks for hundreds of programs into one virus.
>
> Yes, but _will_ they? That is the question. To do so would require a
> meticulousness and thoroughness not typically found in most humans.

As I said, blended attacks are common these days. Nimda had a number of
attack vectors. It's not really that far of a stretch for some simple code
to be used in the initial attack. That code would then analyze the system
to determine what OS, versions, installed programs, etc. were present,
then download custom-tailored attacks to the machine, possibly even
compiling them locally from source.
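
The "analyze the system" step is the trivial part. A benign sketch of
that kind of fingerprinting - assuming Python is available, and assuming
a dpkg-based distribution for the package count:

# What OS/kernel/architecture is this, and how much is installed?
import platform
import subprocess

print(platform.system(), platform.release(), platform.machine())
try:
    out = subprocess.run(["dpkg-query", "-W"], capture_output=True, text=True)
    print(len(out.stdout.splitlines()), "packages installed")
except FileNotFoundError:
    print("no dpkg here; a real probe would fall back to rpm, etc.")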

>>> The weakness of monoculture...
>>
>> not being one doesn't make you immune.
>
> No, not immune. It does however even out the odds more.

I don't believe that's true. Just because you're immune to plane crashes
because you don't fly doesn't mean you won't get hit by a bus crossing the
street.

> It is always possible and very likely even that if and when GNU/Linux
> becomes more prevalent, it will be targeted by these villains to a much
> higher degree. At present however - while there is truth in what you
> say - I do believe that they target Windows more because it is what
> they themselves use and know.

So, even given your argument, it stands to reason that as Linux becomes
more popular, those misguided users would then be familiar with Linux.

Ray Ingles

unread,
Jan 23, 2006, 10:10:45 AM1/23/06
to
On 2006-01-21, billwg <bi...@twcf.rr.com> wrote:
> The whole premise is that it takes a better hacker to compromise a linux
> system if it is not being used with root privilege. So the casual
> hacker doesn't bother with you, just the serious hackers? That's a real
> comfort, nut! LOL!!!

No, the premise is that, if *propagation* is more difficult, then the
incidence of malware will be reduced. In particular, if the rate of
propagation is below a certain level, attempted infections will die out
rather than grow and spread.
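
To make that threshold concrete, here's a toy model - a sketch of the
arithmetic, not a measurement of anything real:

# Each infected machine infects an average of R others per "generation"
# before being cleaned up; totals scale roughly as R**n, so an outbreak
# fizzles whenever R stays below 1.
def infected_after(r, generations, seed=100):
    count = float(seed)
    for _ in range(generations):
        count *= r
    return count

for r in (0.8, 1.0, 1.2):
    print("R =", r, "->", round(infected_after(r, 20)),
          "hosts after 20 generations")
# R = 0.8 ->    1 host   (dies out)
# R = 1.2 -> 3834 hosts  (exponential growth)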

The general idea is well established in epidemiology. Change the word
"malware" in the passage above to, say, "retrovirus" and no doctor would
find it at all remarkable. There are really only two possible lines of
objection - either (a) the biological metaphor does not apply to computer
networks, or (b) it applies, but there really isn't a difference in
propagation difficulty.

Objection (a) doesn't really work - too many security researchers find
biological metaphors too useful for the analogy not to be valid in at
least some ways. From the very beginning of computing, people have
recognized the fundamental relationships - von Neumann, for example.

So, you go after (b), in a backhanded way, trying to obscure the real
point. But your objection is silly on its face. The most prevalent form
of malware today is the automatically-propagating kind - precisely
*because* it doesn't require active human intervention to spread.
Malware that requires skilled human attention to operate is, by its very
nature, far less widespread, in no small part because there is always a
shortage of that commodity.

Businesses and wealthy individuals need to worry about clever
attackers. Most people don't, because the payoff isn't generally worth
the time involved, from the attacker's perspective. It's only when the
barriers to propagation are low that automated attacks become cost
effective. Raising those barriers has a direct effect on the prevalence
of general-purpose attacks.

--
Sincerely,

Ray Ingles (313) 227-2317

"It's not easy being green. It takes way more food coloring
than you'd think." - Louis Hochman

Ray Ingles

unread,
Jan 23, 2006, 11:00:45 AM1/23/06
to
On 2006-01-21, Erik Funkenbusch <er...@despam-funkenbusch.com> wrote:
>> We also know that with Linux, that would usually only be the first step to
>> taking over a machine, as the exploit code would typically only execute
>> with user privilege, limiting what it could do.
>
> Apart from everything else, you seem to think that a 2 staged attack is
> improbable. This is pretty far from the case. We've seen lots of
> "blended" attacks, like nimda, where it tries dozens of different
> multi-stage attacks. Nothing is stopping someone from writing a worm that
> gets in via a small user flaw, then downloads potentially hundreds or
> thousands of different local root exploits to try and take advantage of
> the root account.

No, it's not impossible at all. But the level of difficulty is quite a
bit higher, which reduces the pool of talent that can successfully pull
one off. Secondly, with Linux the possible set of attack vectors must be
*dramatically* broadened compared to Windows. On Windows, if there's a
mailserver, it's vastly more likely to be Exchange vs. anything else. An
attack can be quite effective just targeting a flaw in Exchange.

On Linux, off the top of my head I can think of sendmail, smail, and
postfix. I know there are more, but I don't feel like looking them up.
An email attack against Linux would be at least three times more
difficult to arrange (in practice, much more so, because all of them are
quite different internally and code reuse would *at best* be minimal).

Now, there are extremely widespread apps on Linux that a virus could
count on being present and be reasonably effective at spreading - e.g.
Firefox. But the vast majority of those don't run with root privileges,
and therefore are not useful for the second stage of an attack. The ones
that are both common and run as root receive significant security
attention - e.g. cron, sshd, etc. Flaws in these are fixed quickly, and
with modern automated update systems, those fixes propagate quickly. The
window isn't terribly wide for those things. (Contrast that with
Windows, where pirated copies that are never given an update are common.
There's no such problem with Linux.)

Also note that *each* of those "hundreds or thousands of different
local root exploits" must be developed by a human. Nimda used "dozens",
by your report, which is at least an order of magnitude less than what
you are proposing. It also almost *requires* a "plugin" model to write
them, and a common interface for the exploits to use with the worm
'infrastructure'. This, however, implies a common feature that can be
scanned for by automated tools...

Finally, note that part of JPB's point was that you don't have to have
a zero rate of infection for a worm to fail to spread. You just need a
low enough rate that the spread doesn't become exponential. Considering
the difficulties in writing a widespread Linux worm (arising from
multiple sources - platform diversity, user/root separation, sensible
separation of executable and data formats), it does seem likely that
rates of infection would be too low to sustain epidemics in most cases.

Certainly that's been the case so far. There *are* quite a large number
of Linux webservers out there - not the majority, necessarily, but
enough to make a good target for a worm writer. And yet, there have been
few attempts, and so far all have fizzled.

>> ==> Ensure that it and/or any downloaded "little friends" get to
>> automatically execute again when the computer restarts. <==
>
> never heard of .profile? That ensures that any time a user logs in,
> certain settings get set, which can include running any program. Now you
> COULD make .profile a root-owned file, write-inaccessible to the user,
> but honestly, how often does that happen? Users want to edit their
> .profile.

Well, others have pointed out that .profile isn't a perfect example,
but I will agree with you that it's a good one. The problem is, without
patching the operating system, it's much harder to hide such recurring
programs from automated scanning tools...
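
A minimal sketch of what such a scanner could look like - the watched-file
list and baseline location here are just assumptions for illustration:

# Baseline-and-diff scan of per-user auto-run files: record SHA-256
# hashes once, then warn whenever a watched file changes.
import hashlib
import json
import os

WATCHED = [".profile", ".bash_profile", ".bashrc", ".cshrc", ".login"]
BASELINE = os.path.expanduser("~/.autorun-baseline.json")

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

home = os.path.expanduser("~")
current = {name: sha256(os.path.join(home, name))
           for name in WATCHED if os.path.exists(os.path.join(home, name))}

if not os.path.exists(BASELINE):
    with open(BASELINE, "w") as f:
        json.dump(current, f)  # first run: record the known-good state
else:
    with open(BASELINE) as f:
        baseline = json.load(f)
    for name, digest in sorted(current.items()):
        if baseline.get(name) != digest:
            print("WARNING:", name, "changed since the baseline was taken")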

> Rooting a Windows box didn't used to be an easy thing to do either. What
> happens, is you get large scale groups of users working on the problem, and
> before you know it you've got tons of root exploit code floating around.
> Metasploit, for example, serves this purpose (though Metasploit is also
> useful for security professionals too).

Well, technically, rooting a Windows box wasn't necessary at all
before NT. :-> With the advent of widespread broadband connections,
though, malware got a business model and more effort started being put
into subverting security, I agree. However, attackers did go for the
low-hanging fruit of Windows...

>> What about the user data? Of course our putative malicious code is able to
>> access or damage the user's data, which is important for the user in a way
>> that the OS files are not. However, damaging data is a bad tactic for
>> malicious code to use;
>
> In other words, you "hope" that they don't want to damage the data.

No. That is not even close to what he said. What he said was, "if
malware wants to spread, damaging user data is a very bad tactic to
use". This is because it will be noticed, and acted upon. This is seen
in the epidemiology field all the time - infections that start out
extremely agressive and harmful evolve relatively quickly to milder
forms. A host that's dead is usually useless for spreading to other
hosts.

> Even if they don't damage it, they can certainly look at it, and send
> that data back to whomever they want.

That's a serious risk indeed, but it's not the same thing as the FUD
that's usually spread around about viruses deleting user data.

> Further, a virus can be written to run for weeks before doing
> damage, allowing itself maximum spread time.

Where's the profit motive?

--
Sincerely,

Ray Ingles (313) 227-2317

Executives get to pick ONE:
A) I'm a total idiot. I had no idea what my company was doing - I'm not
liable for the company's misdeeds.
B) I knew exactly what was going on. That's what I'm paid for. I'm
personally responsible for the acts of my company.
If A, they forfeit all their money and assets as reparation for fraud.
If B, they face criminal penalties.

Now, ask yourself, why AREN'T executives held to this standard?

Ray Ingles

unread,
Jan 23, 2006, 11:06:10 AM1/23/06
to
On 2006-01-21, JPB <news{@}europa{.}demon{.}co{.}uk> wrote:
> However, I don't quite get this, because when I look at the behaviour of
> actual virus/worm/trojan attacks against Windows, that *isn't* all that
> successful and automated malicious code needs to do; getting to execute
> [once] as a local user is only the start for malicious code attacking
> Linux, with further obstacles to overcome in order to propagate
> effectively.

An insightful post, thanks! This is one of the reasons why I think
malware will end up increasing operating system diversity in the future
(google for my post a few months back, "The Keystone Predator").

--
Sincerely,

Ray Ingles (313) 227-2317

Microsoft Windows - The first fully modular software disaster.

The Ghost In The Machine

unread,
Jan 23, 2006, 12:00:07 PM1/23/06
to
In comp.os.linux.advocacy, Peter
<pet...@parazzdise.net.nz>
wrote
on Sun, 22 Jan 2006 08:07:57 +1300
<43d2...@clear.net.nz>:

> billwg wrote:
>
>
>> The whole premise is that it takes a better hacker to
>> compromise a linux system if it is not being used with
>> root privilege. So the casual hacker doesn't bother
>> with you, just the serious hackers? That's a real
>> comfort, nut! LOL!!!
>
> Except that while it is fairly easy for malware writers
> to take over armies of Windows boxes by mass production
> techniques, it is far, far more difficult to take over
> armies of Linux machines at root level. This requires
> a more complex human interactive approach, especially
> to get privilege elevation once a user account has been
> compromised.
>

This is not to say they won't try, though. Port 22, if
open, will be attacked. (Successfully? Probably not,
unless one uses a very dumb password on their box. But I
did have to enter Yet Another Entry into my /etc/hosts.deny
blacklist over the weekend.)

I'll admit it's not even close to the number of port 445 pings
I get, though. I get hundreds of those per day.

--
#191, ewi...@earthlink.net
It's still legal to go .sigless.

The Ghost In The Machine

unread,
Jan 23, 2006, 12:00:07 PM1/23/06
to
In comp.os.linux.advocacy, ray
<r...@zianet.com>
wrote
on Sat, 21 Jan 2006 10:33:36 -0700
<pan.2006.01.21....@zianet.com>:

> I realize that these possibilities exist. MS proponents tell me we only
> have 'security by obscurity'. My point - OK, so what? We still do not SEE
> malware attacking *nix systems - good enough for me.
>

I'd have to look but suspect Apache/Unix systems have been
compromised in the past -- the Morris worm and the Li0n
worm come to mind. It's just far, far easier to get into
a Windows box, and hackers need to be more intelligent in
order to compromise Linux systems.

The Li0n worm in particular only worked on a specific
version of BIND sitting on a RedHat box, and not for
all that long.

But now is not the time for complacency, on either system.

Linønut

unread,
Jan 23, 2006, 12:47:41 PM1/23/06
to
After takin' a swig o' grog, The Ghost In The Machine belched out this bit o' wisdom:

> This is not to say they won't try, though. Port 22, if
> open, will be attacked. (Successfully? Probably not,
> unless one uses a very dumb password on their box. But I
> did have to enter Yet Another Entry into my /etc/hosts.deny
> blacklist over the weekend.)

I moved ssh off of port 22, and I've switched to public-key authentication
(which still requires a pass-phrase, but with ssh-add I only have to
provide it once, not each time I log in remotely.)

The Ghost In The Machine

unread,
Jan 23, 2006, 4:00:11 PM1/23/06
to
In comp.os.linux.advocacy, Linønut
<linøn...@bone.com>
wrote
on Mon, 23 Jan 2006 11:47:41 -0600
<wPednaEeZ5I...@comcast.com>:

They'll probably port scan you anyway. :-) But that might work
for a while; the main problem is that if one moves to the
wrong port the ISP might block it off for its own protection,
depending on what ISP one is using. For example, it would be
fairly stupid to move ssh to port 137.

I'll admit to wondering what's on ports 214 through 219,
221 through 244, 246 to 344, 390 through 405 and 407
through 426, though. There's gaps all over /etc/services.

*scratches head*

Interestingly, no hacker's tried port 631 yet. I guess CUPS/IPP isn't
that attractive a target... :-)

billwg

unread,
Jan 23, 2006, 5:45:33 PM1/23/06
to

"Ray Ingles" <sorc...@localhost.localdomain> wrote in message
news:slrndt9sqa....@localhost.localdomain...

>
> So, you go after (b), in a backhanded way, trying to obscure the real
> point. But your objection is silly on its face. The most prevalent form
> of malware today is the automatically-propagating kind - precisely
> *because* it doesn't require active human intervention to spread.
> Malware that requires skilled human attention to operate is, by its very
> nature, far less widespread, in no small part because there is always a
> shortage of that commodity.
>
> Businesses and wealthy individuals need to worry about clever
> attackers. Most people don't, because the payoff isn't generally worth
> the time involved, from the attacker's perspective. It's only when the
> barriers to propagation are low that automated attacks become cost
> effective. Raising those barriers has a direct effect on the prevalence
> of general-purpose attacks.
>
Sort of my point exactly, Ray. Linux is no more secure and probably
less secure than Windows in the areas where security is actually
important. Where things are a mere nuisance, as with the great majority
of these "viruses", the risk is very low and the common methods for
avoiding the malware are very effective. I personally experienced a
virus only once, and that was 7 years ago, when I ran a copy of an EXE
that I had originally written myself, sent back to me by a friend who
was having trouble running it. It took several hours to clean everything
up using Symantec's Norton AV, which I was not using at the time but
have used ever since.

That type of malware is now little more than spam; it shows immaturity
on the part of the perpetrator but is not a real cause for alarm.


Erik Funkenbusch

unread,
Jan 23, 2006, 6:26:53 PM1/23/06
to
On Mon, 23 Jan 2006 17:00:07 GMT, The Ghost In The Machine wrote:

> This is not to say they won't try, though. Port 22, if
> open, will be attacked. (Successfully? Probably not,
> unless one uses a very dumb password on their box. But I
> did have to enter Yet Another Entry into my /etc/hosts.deny
> blacklist over the weekend.)
>
> I'll admit it's not even close to the number of port 445 pings
> I get, though. I get hundreds of those per day.

Funny, but I get a lot of bots trying hundreds of different usernames on my
ssh logins. This is a daily occurrence, sometimes multiple times per day.

Jan 22 15:37:11 xxxxxx sshd[25000]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:12 xxxxxx sshd[25002]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:14 xxxxxx sshd[25004]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:14 xxxxxx sshd[25006]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:15 xxxxxx sshd[25008]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:16 xxxxxx sshd[25010]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:17 xxxxxx sshd[25012]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:18 xxxxxx sshd[25014]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:18 xxxxxx sshd[25016]: Illegal user carol from ::ffff:64.202.123.216
Jan 22 15:37:18 xxxxxx sshd[25016]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:19 xxxxxx sshd[25018]: Illegal user cesar from ::ffff:64.202.123.216
Jan 22 15:37:19 xxxxxx sshd[25018]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:20 xxxxxx sshd[25020]: Illegal user clark from ::ffff:64.202.123.216
Jan 22 15:37:20 xxxxxx sshd[25020]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:21 xxxxxx sshd[25022]: Illegal user clinton from ::ffff:64.202.123.216
Jan 22 15:37:21 xxxxxx sshd[25022]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:21 xxxxxx sshd[25024]: Illegal user kayla from ::ffff:64.202.123.216
Jan 22 15:37:21 xxxxxx sshd[25024]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!
Jan 22 15:37:22 xxxxxx sshd[25026]: Illegal user russ from ::ffff:64.202.123.216
Jan 22 15:37:22 xxxxxx sshd[25026]: Address 64.202.123.216 maps to unknown.ord.scnet.net, but this does not map back to the address - POSSIBLE BREAKIN ATTEMPT!

rex.b...@gmail.com

unread,
Jan 23, 2006, 6:47:20 PM1/23/06
to
The thing to keep in mind is that the security standards adopted by
Linux were based on those used for UNIX. Furthermore, Linux quickly
found new levels of security and even provided options to exceed the
security of traditional Unix implementations on the same hardware. For
example, prior to Linux, the common practice was to route bogus traffic
to a dummy address, or a loopback address, where it would simply die.
Linux contributors developed "ipchains", which could filter traffic on
input and output ports. You could filter incoming traffic to processes
as well. Most modern firewalls are based on this model, which has since
been superseded by iptables (with some additional modifications).

SSL was based on some open source specifications which were implemented
by Netscape and contributed to Linux in the form of the SSL library.
The Linux and FreeBSD communities expanded this and created ssh as well
as stunnel and later ipsec. SSH and STUNNEL provided the ability to
expose a single server with encrypted ports, which had to have matching
public keys and exchanged data using DES or Triple-DES to create tunnel
systems to multiple servers using a security package so strict and
effective that even the NSA got nervous.

Kerberos had been available for Linux for many years, even before Linux
was released. The disturbing feature of Kerberos is that it
effectively "changed the passwords" every few seconds, usually between
30 seconds and 5 minutes. These could be used to validate and
distribute secure encryption keys as well, again making the system
almost "too secure" for the NSA and other intelligence agencies.

Microsoft provided a "back door" by offering to let unregulated
non-government certificate authorities manage the distribution and
authentication of public and private keys; they also provided a unique
identifier in the kerberos protocol that could be used to track and
trace use of kerberos authentication. Again, since the kerberos keys
were being issued and certified by Microsoft, it was possible for
corporate interests to "wire tap" into these systems without warrants.

Ironically, Linux can be configured both ways. Linux can be configured
as "too secure", in which case someone who really wants to scan your
traffic for terrorism, pedophilia, drug deals (now considered
terrorism), tax evasion, or child support evasion - will get really
upset with you.

As a result, most Linux administrators "play nice" and use Verisign or
Thawte group for authentication and certificates. It really just
depends on how secure you really want to be. :D

Keep in mind that technology designed for and on UNIX was designed to
survive a nuclear holocaust, resist the most malicious hackers, and
protect critical systems including power distribution control
equipment, telecommunications equipment, transportation control
equipment, and even interplanetary missions and orbital spy
satellites. The really hard-core stuff is still written in Ada, but
*nix can give that stuff a pretty good run for the money.

Of course, there are still some hard-and-fast rules to security, like:
don't download software from unknown sources, don't run it unless you
know exactly how it was created and how you got it, and don't run an
untested application as root - test unknown or untrusted applications
in a restricted account with a chroot setting so it can't do any "real"
damage.
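
A bare-bones sketch of that last rule - this assumes root privileges and
a pre-populated jail at /srv/jail containing a shell, its libraries, and
the program under test:

# Run an untrusted program confined to a chroot jail, with privileges
# dropped to an unprivileged uid/gid before anything executes.
import os

os.chroot("/srv/jail")  # confine the filesystem view to the jail
os.chdir("/")
os.setgid(65534)        # drop group first (nogroup)...
os.setuid(65534)        # ...then user (nobody); there is no way back up
os.execv("/bin/sh", ["sh", "-c", "./untrusted-app"])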

That doesn't mean that you won't have people who will change the
defaults and deliberately expose a server. Which is why Linux and Unix
also have auditing and accounting history that can be used to catch the
perpetrators in the act.

Really, one of the big reasons why people don't like to mess with *nix
systems is that experienced hackers know there is a very good
chance that any persistent hacking will net them a visit to jail.
Mitnick was one of the extreme cases - held for over four years without
trial, because they could prove that he had hacked into a computer and
stolen valuable information, but the laws regarding the rules of evidence
had not been drafted to allow the search of his computer to be admitted.

The irony is that the particular crime which he was accused of
committing only carried a 5-year penalty.

The Ghost In The Machine

unread,
Jan 23, 2006, 7:00:05 PM1/23/06
to
In comp.os.linux.advocacy, billwg
<bi...@twcf.rr.com>
wrote
on Mon, 23 Jan 2006 22:45:33 GMT
<h0dBf.13857$Zj7....@tornado.tampabay.rr.com>:

Interesting. So 5,500 hits on port 445 per week are merely a "nuisance"?

GreyCloud

unread,
Jan 23, 2006, 7:11:05 PM1/23/06
to
Erik Funkenbusch wrote:

Maybe it's K-man trying to break in.

The Ghost In The Machine

unread,
Jan 23, 2006, 8:00:15 PM1/23/06
to
In comp.os.linux.advocacy, Erik Funkenbusch
<er...@despam-funkenbusch.com>
wrote
on Mon, 23 Jan 2006 17:26:53 -0600
<1nnjmrix...@funkenbusch.com>:

> On Mon, 23 Jan 2006 17:00:07 GMT, The Ghost In The Machine wrote:
>
>> This is not to say they won't try, though. Port 22, if
>> open, will be attacked. (Successfully? Probably not,
>> unless one uses a very dumb password on their box. But I
>> did have to enter Yet Another Entry into my /etc/hosts.deny
>> blacklist over the weekend.)
>>
>> I'll admit it's not even close to the number of port 445 pings
>> I get, though. I get hundreds of those per day.
>
> Funny, but I get a lot of bots trying hundreds of different usernames on my
> ssh logins. This is a daily occurrence, sometimes multiple times per day.
>

[logs snipped]

Maybe your ISP's subnet is more infested than my ISP's is.
I can't say I know, but somebody's definitely out there
doing Naughty Things(tm).

(I have a perl script processing my logs to count them,
and am throwing things into a PostgreSQL database. The
number of records per week varies.)
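
The counting itself is only a few lines in any scripting language. For
illustration, a rough Python equivalent of the idea - the log path and
line format here are assumptions, not my actual script:

# Tally failed ssh login attempts per source address from the auth log.
import re
from collections import Counter

PATTERN = re.compile(r"Illegal user \S+ from \S*?([\d.]+)$")
hits = Counter()

with open("/var/log/auth.log") as log:
    for line in log:
        m = PATTERN.search(line)
        if m:
            hits[m.group(1)] += 1

for ip, n in hits.most_common(10):
    print(ip, n, "attempts")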

Peter Jensen

unread,
Jan 23, 2006, 8:35:18 PM1/23/06
to
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

The Ghost In The Machine wrote:

> Interestingly, no hacker's tried port 631 yet. I guess CUPS/IPP isn't
> that attractive a target... :-)

Well, it does default to listening only to localhost. It's only when
using CUPS as a network printer that things get interesting.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2 (GNU/Linux)

iD8DBQFD1YRUd1ZThqotgfgRApPuAKDL9F70JHPuzObheFfgpoq8Ip/YogCdG9h3
4f3TN9houIJUjodvGjh5VW8=
=ytAJ
-----END PGP SIGNATURE-----
--
PeKaJe

"That's one small step for Fry..." -Fry
"...and one giant line for admission." -stranger in line

Aragorn

unread,
Jan 23, 2006, 10:55:20 PM1/23/06
to
On Tuesday 24 January 2006 00:26, Erik Funkenbusch stood up and spoke
the following words to the masses in /comp.os.linux.advocacy...:/

> On Mon, 23 Jan 2006 17:00:07 GMT, The Ghost In The Machine wrote:
>> This is not to say they won't try, though. Port 22, if
>> open, will be attacked. (Successfully? Probably not,
>> unless one uses a very dumb password on their box. But I
>> did have to enter Yet Another Entry into my /etc/hosts.deny
>> blacklist over the weekend.)
>>
>> I'll admit it's not even close to the number of port 445 pings
>> I get, though. I get hundreds of those per day.
>
> Funny, but I get a lot of bots trying hundreds of different usernames
> on my ssh logins. This is a daily occurrence, sometimes multiple times
> per day.

<snip log>

Of course, I do not know whether your system is high-profile - by
which I mean that it has a registered domain name attached to its IP
address - but I have seen similar attacks on our (public) server
in the course of last year. Actually, it started in late 2004
and went on for about three or four months.

They are obviously using some kind of automated username/password
generator, because if you look at the timestamps, you will see two or
more login attempts per second.

My advice to you would therefore be to have /pam/ impose a certain
delay after each sequence of three incorrect login attempts. This
should discourage them somewhat and make it harder for them to try.

I've taken the liberty of /tracerouting/ the IP address. See the output
below...:

traceroute to 64.202.123.216 (64.202.123.216), 30 hops max, 38 byte packets
 1  dD5769001.access.telenet.be (213.118.144.1)  25.128 ms  7.748 ms  20.022 ms
 2  * * *
 3  213.224.126.98 (213.224.126.98)  43.119 ms  17.052 ms  7.065 ms
 4  brx-b3-geth6-0.telia.net (213.248.73.113)  6.972 ms  11.928 ms  7.712 ms
 5  brx-b1-pos1-0.telia.net (213.248.72.225)  5.989 ms  7.832 ms  5.918 ms
 6  ldn-bb1-pos0-0-0.telia.net (213.248.65.193)  12.981 ms  12.851 ms  10.942 ms
 7  nyk-bb1-pos0-2-0.telia.net (213.248.65.90)  78.897 ms  80.783 ms  109.760 ms
 8  chi-bb1-pos6-0-0-0.telia.net (213.248.80.153)  99.868 ms  101.667 ms  104.810 ms
 9  so0-2-0-250.j2.ord.scnet.net (64.202.110.69)  110.950 ms  113.868 ms  107.810 ms
10  so2-1-0.j1.ord.scnet.net (205.234.158.81)  114.781 ms  115.668 ms  111.092 ms
11  tagged.b2.ord.scnet.net (64.202.111.114)  116.610 ms  114.887 ms  108.664 ms
12  vps12.servershost.net (205.234.145.236)  107.809 ms  116.670 ms  114.963 ms
13  unknown.ord.scnet.net (64.202.123.216)  109.758 ms  112.792 ms  107.857 ms

However, a ping on the last subdomain - /unknown.ord.scnet.net/ - yields
an "Unknown host". Also note that the 12th hop has an IP from another
range than hops 11 and 13.

I've tried a graphical /xtraceroute/ as well, and it seems to be an IP
address in Europe, although there is always a possibility that this is
incorrect. What I do know is that /telia.net/ is an ISP in Sweden.

Hmm... More weirdness... I've just tried to ping /scnet.net/ and I keep
getting "Unknown host".

Okay, then we'll try a /whois.../ - pretty cool that us GNU/Linux users
can do that from within a commandline terminal, huh? :-þ

*Note:* Not to cause the Server Central Network any spam overhead, I'm
spoofing the e-mail addresses below by adding three fake TLD's, which
your human brain will be able to discern, but which a bot won't.

*****
Registration Service Provided By: Server Central Network
Contact: hostm...@servercentral.net
Visit: http://www.servercentral.net

Domain name: scnet.net

Administrative Contact:
Server Central Network
Customer Owned Domain (hostm...@servercentral.net.removethis)
+1.3128291111
Fax: +1.3128291110
209 West Jackson
Suite 700
Chicago, IL 60606
US

Technical Contact:
Server Central Network
Customer Owned Domain (hostm...@servercentral.net.getreal)
+1.3128291111
Fax: +1.3128291110
209 West Jackson
Suite 700
Chicago, IL 60606
US

Registrant Contact:
Server Central Network
Customer Owned Domain (hostm...@servercentral.net.areyoukidding)
+1.3128291111
Fax: +1.3128291110
209 West Jackson
Suite 700
Chicago, IL 60606
US

Status: Locked

Name Servers:
ns1.scservers.com
ns2.scservers.com

Creation date: 03 May 2003 01:06:10
Expiration date: 03 May 2006 01:06:10
Whois-Services: 04c2e7f2-eeaa-4dd3...@whois-services.com
*****

The domain is "locked", which means that it's secured against
being transferred to another registrar. It will however
expire in a little over three months...

They also don't seem to be hosting any website. I get a 404 error
trying to browse to http://www.scnet.net.

Definitely something fishy going on there... If I were you, I'd contact
the /hostmaster.../ He may reply to your complaint - though he probably
won't - but the odds are quite good that they'll start an audit of
their systems and run into the rootkits, or at least change their
passwords.

Hope this helps... ;-)

Aragorn

unread,
Jan 23, 2006, 11:05:28 PM1/23/06
to
On Tuesday 24 January 2006 01:00, The Ghost In The Machine stood up and
spoke the following words to the masses in /comp.os.linux.advocacy...:/

> In comp.os.linux.advocacy, billwg <bi...@twcf.rr.com> wrote
> on Mon, 23 Jan 2006 22:45:33 GMT
> <h0dBf.13857$Zj7....@tornado.tampabay.rr.com>:
>>

>> Sort of my point exactly, Ray. Linux is no more secure and probably
>> less secure than Windows in the areas where security is actually
>> important.

I can't believe what I'm seeing...! Is /billwg/ really *that* clueless
and a devout, blinded-by-faith Microsoft zealot, or is he fully aware
that he's lying through his teeth?

The only people _professionally_ involved with information technology
who would dare to make such a ludicrous statement are all working as
senior staff at Microsoft Corp!

Bill Weisberger - I hope I spell that correctly - is either severely
brainwashed, or else he is one of the brainwashers themselves. Nobody
else in their right mind would ever make such a stupid remark.

P.S.: If he *is* indeed a liar - as opposed to being brainwashed - then
why the Hell are people still bothering to reply to him? Even in terms
of Windows advocacy, the guy is a joke!

<shake head>

Jesse F. Hughes

unread,
Jan 24, 2006, 1:36:32 AM1/24/06
to
rex.b...@gmail.com writes:

> Kerberos had been available for Linux for many years, even before Linux
> was released. The disturbing feature of Kerberos is that it
> effectively "changed the passwords" every few seconds, usually between
> 30 seconds and 5 minutes. These could be used to validate and
> distribute secure encryption keys as well, again making the system
> almost "too secure" for the NSA and other intelligence agencies.

Wow.

> Microsoft provided a "back door" by offering to let unregulated
> non-government certificate authorities manage the distribution and
> authentication of public and private keys, they also provided a
> unique identifier in the kerberos protocol that could be used to
> track and trace use of kerberos authentication. Again, since the
> kerberos keys were being issued and certified by Microsoft, it was
> possible for corporate interests to "wire tap" into these systems
> without warrants.

The curs!

I *knew* it. They are so totally evil. Totally.

Thanks, Rex!

--
Jesse F. Hughes
"Doesn't pay to lie if you aren't good at it."
-- Captain Friday, /City of the Dead/
Adventures by Morse radio show

DFS

unread,
Jan 24, 2006, 2:22:49 AM1/24/06
to
Jesse F. Hughes wrote:
>
> Wow.

>
> The curs!
>
> I *knew* it. They are so totally evil. Totally.
>
> Thanks, Rex!


I don't think Rex understands sarcasm.


Linønut

unread,
Jan 24, 2006, 7:20:00 AM1/24/06
to
After takin' a swig o' grog, Aragorn belched out this bit o' wisdom:

> Bill Weisberger - I hope I spell that correctly - is either severely
> brainwashed, or else he is one of the brainwashers themselves. Nobody
> else in their right mind would ever make such a stupid remark.
>
> P.S.: If he *is* indeed a liar - as opposed to being brainwashed - then
> why the Hell are people still bothering to reply to him? Even in terms
> of Windows advocacy, the guy is a joke!

He is good practice in grabbing hold of a greased pig, if I may use
metaphor.

William Poaster

unread,
Jan 24, 2006, 8:14:11 AM1/24/06
to
Once upon a Tue, 24 Jan 2006 04:05:28 +0000 dreary, as I laboured tired &
weary, came a tapping at my door when Aragorn posted this, & nothing
more...

> On Tuesday 24 January 2006 01:00, The Ghost In The Machine stood up and
> spoke the following words to the masses in /comp.os.linux.advocacy...:/
>
>> In comp.os.linux.advocacy, billwg <bi...@twcf.rr.com> wrote on Mon, 23
>> Jan 2006 22:45:33 GMT
>> <h0dBf.13857$Zj7....@tornado.tampabay.rr.com>:
>>>
>>> Sort of my point exactly, Ray. Linux is no more secure and probably
>>> less secure than Windows in the areas where security is actually
>>> important.

*BLINK*


> I can't believe what I'm seeing...! Is /billwg/ really *that* clueless
> and a devout, blinded-by-faith Microsoft zealot, or is he fully aware that
> he's lying through his teeth?

Incredible, isn't it....<shaking head>

> The only people _professionally_ involved with information technology who
> would dare to make such a ludicrous statement are all working as senior
> staff at Microsoft Corp!
>
> Bill Weisberger - I hope I spell that correctly - is either severely
> brainwashed, or else he is one of the brainwashers themselves. Nobody
> else in their right mind would ever make such a stupid remark.
>
> P.S.: If he *is* indeed a liar - as opposed to being brainwashed - then
> why the Hell are people still bothering to reply to him? Even in terms of
> Windows advocacy, the guy is a joke!
>
> <shake head>

After reading him for a couple of months after he first joined the group,
I binned him. I thought he was puerile at the best of times,
brainwashed...perhaps, paid to say stupid things...maybe, but that
statement of his, above, left me speechless. Unbelievable.

--
The majority of wintrolls DO know the
difference between their ass & their elbows,
because they cannot talk out of their elbows.

Ray Ingles

unread,
Jan 24, 2006, 9:05:41 AM1/24/06
to
On 2006-01-23, billwg <bi...@twcf.rr.com> wrote:
>> It's only when the barriers to propagation are low that automated
>> attacks become cost effective. Raising those barriers has a direct
>> effect on the prevalence of general-purpose attacks.
>>
> Sort of my point exactly, Ray. Linux is no more secure and probably
> less secure than Windows in the areas where security is actually
> important.

But the evidence does not bear that out at all. There's yet another
worm for Windows today (Nyxem), but there just aren't any for Linux.
Apparently the viral propagation barriers for Linux *are* higher than
for Windows, since nothing of any significance has appeared on the
Linux side.

> That type of malware is now not so much more than spam and shows an
> immaturity on the part of the perpetrator but is not a real cause of
> alarm.

Malware that causes damage (like Nyxem) is, as JPB pointed out, a
short-lived nuisance (though more of 'em keep getting written - they
can't be ignored as a class). You're completely ignoring the
profit-driven malware that is the primary threat now.

http://www.enterprisenetworkingplanet.com/netsecur/article.php/3579411
http://www.theinquirer.net/?article=29100
http://cnews.canoe.ca/CNEWS/TechNews/TechInvestor/2006/01/02/1376143-cp.html

--
Sincerely,

Ray Ingles (313) 227-2317

"One of the main reasons for the downfall of the Roman
Empire was that, lacking zero, they had no way to indicate
successful termination of their C programs." - Robert Firth

chrisv

unread,
Jan 24, 2006, 9:18:00 AM1/24/06
to
Proven liar billwg wrote:

>Sort of my point exactly, Ray. Linux is no more secure and probably
>less secure than Windows in the areas where security is actually
>important.

LOL!!! You've told some whoppers in your day, billwg, but this has
GOT to take the cake.

Sinister Midget

unread,
Jan 24, 2006, 10:20:07 AM1/24/06
to
On 2006-01-24, Aragorn <str...@telenet.invalid> posted something concerning:

> On Tuesday 24 January 2006 01:00, The Ghost In The Machine stood up and
> spoke the following words to the masses in /comp.os.linux.advocacy...:/
>
>> In comp.os.linux.advocacy, billwg <bi...@twcf.rr.com> wrote
>> on Mon, 23 Jan 2006 22:45:33 GMT
>> <h0dBf.13857$Zj7....@tornado.tampabay.rr.com>:
>>>
>>> Sort of my point exactly, Ray. Linux is no more secure and probably
>>> less secure than Windows in the areas where security is actually
>>> important.
>
> I can't believe what I'm seeing...! Is /billwg/ really *that* clueless
> and a devout, blinded-by-faith Microsoft zealot, or is he fully aware
> that he's lying through his teeth?

Door #2. It's not even questionable.

--
Ruland: Innovative Microsoft peer-to-peer software.

The Ghost In The Machine

unread,
Jan 24, 2006, 11:00:05 AM1/24/06
to
In comp.os.linux.advocacy, Aragorn
<str...@telenet.invalid>
wrote
on Tue, 24 Jan 2006 03:55:20 GMT
<IyhBf.176877$S02.6...@phobos.telenet-ops.be>:

I for one would think sshd already has that facility.
The controls are a little crude, but basically,
if I set MaxAuthTries to 6 (the default setting) in
/etc/ssh/sshd_config, then after half those tries -- 3 -- a
message is sent to SYSLOG. Messages, or variants thereof,
then show up in one's logs:

Jan 22 13:28:58 hostname sshd[12502]: Invalid user administrator from 213.50.3.6

I for one would prefer an option to log every failed
login attempt, but this works for the most part.
It depends on how sophisticated the pepperbot [*] is;
I've seen both kinds. (The pepperbot of the second kind
shows up as bandwidth but no logs, since it presumably
varies the name tried on every attempt, on that particular
connection. The pepperbot might multiply connect as well,
although I can't prove this without having a sniffer up or
examining my logs in more detail; my logs suggest that it's
a single connection. Of course, pepperbots don't need much
-- the only real complicated part is the SSL/TLS layer,
which is widely available; after that, the pepperbot then
walks through a list of username/passwords, apparently.
I could probably set one up without trouble in Perl --
although there's no real point for me doing so.)

Interesting. A VPN, perhaps?

>
> I've tried a graphical /xtraceroute/ as well, and it seems to be an IP
> address in Europe, although there is always a possibility that this is
> incorrect. What I do know is that /telia.net/ is an ISP in Sweden.
>
> Hmm... More weirdness... I've just tried to ping /scnet.net/ and I keep
> getting "Unknown host".
>
> Okay, then we'll try a /whois.../ - pretty cool that us GNU/Linux users
> can do that from within a commandline terminal, huh? :-þ

I get a name resolution failure. This is not a 404. Of
course certain brain-dead browsers have trouble reporting
the difference... :-)

>
> Definitely something fishy going on there... If I were you, I'd contact
> the /hostmaster.../ He may - and probably won't - reply to your
> complaint, but the odds are quite big that they'll start an audit of
> their systems and run into the rootkits, or at least change their
> passwords.
>
> Hope this helps... ;-)
>

[*] not to be confused with an ancient adversary of the
Gallifreyan TimeLords. I name it thus because it
peppers the victim's machines with requests; it's
not a DoS but shows up clearly as traffic.

The Ghost In The Machine

unread,
Jan 24, 2006, 11:00:10 AM1/24/06
to
In comp.os.linux.advocacy, Aragorn
<str...@telenet.invalid>
wrote
on Tue, 24 Jan 2006 04:05:28 GMT
<cIhBf.176890$by.67...@phobos.telenet-ops.be>:

> On Tuesday 24 January 2006 01:00, The Ghost In The Machine stood up and
> spoke the following words to the masses in /comp.os.linux.advocacy...:/
>
>> In comp.os.linux.advocacy, billwg <bi...@twcf.rr.com> wrote
>> on Mon, 23 Jan 2006 22:45:33 GMT
>> <h0dBf.13857$Zj7....@tornado.tampabay.rr.com>:
>>>
>>> Sort of my point exactly, Ray. Linux is no more secure and probably
>>> less secure than Windows in the areas where security is actually
>>> important.
>
> I can't believe what I'm seeing...! Is /billwg/ really *that* clueless
> and a devout, blinded-by-faith Microsoft zealot, or is he fully aware
> that he's lying through his teeth?

He's certainly not up on "Orange Book" or its modern equivalent
(whose name escapes me...grrr.) Of course, security by
obscurity does work -- for awhile. How many users know
the hostnames of the machines on the NSA's internal network?
Probably not many. :-) (Certainly not me.)

As I understand it, NT was C2-certified on a machine whose
floppy was epoxied shut, and disconnected from the Internet.
Not exactly the most usable of configs although very nice
for a marketing-tick.

And C2 is not exactly the highest certification anyway. There's
probably an A1 or A2 machine out there. I'll admit I'm
wondering if there's an EAL7 machine out there. I'm not
in that particular industry (and probably lack the paranoia
to enter it :-) ).

>
> The only people _professionally_ involved with information technology
> who would dare to make such a ludicrous statement are all working as
> senior staff at Microsoft Corp!

One wonders if they're still using SLIME for builds.

>
> Bill Weisberger - I hope I spell that correctly -

Google isn't horribly helpful here so I for one can't say.

> is either severely
> brainwashed, or else he is one of the brainwashers themselves. Nobody
> else in their right mind would ever make such a stupid remark.

It would help if Windows didn't open six rather vulnerable
ports straight out of the box. :-) Nevertheless, once a
port is opened, detritus can flow in; one hopes the computer
can filter out the worst of it.

>
> P.S.: If he *is* indeed a liar - as opposed to being brainwashed - then
> why the Hell are people still bothering to reply to him? Even in terms
> of Windows advocacy, the guy is a joke!
>

I for one would prefer credible information to insults or silence.

> <shake head>

billwg

unread,
Jan 24, 2006, 12:50:28 PM1/24/06
to

"The Ghost In The Machine" <ew...@sirius.tg00suus7038.net> wrote in
message news:846ga3-...@sirius.tg00suus7038.net...

>
> Interesting. So 5,500 hits on port 445 per week are merely a
> "nuisance"?
>
Does it slow you down or something? I've never looked, but I've not
noticed anything wrong either.


Erik Funkenbusch

unread,
Jan 24, 2006, 1:12:02 PM1/24/06
to
On 24 Jan 2006 09:05:41 -0500, Ray Ingles wrote:

> On 2006-01-23, billwg <bi...@twcf.rr.com> wrote:
>>> It's only when the barriers to propagation are low that automated
>>> attacks become cost effective. Raising those barriers has a direct
>>> effect on the prevalence of general-purpose attacks.
>>>
>> Sort of my point exactly, Ray. Linux is no more secure and probably
>> less secure than Windows in the areas where security is actually
>> important.
>
> But the evidence does not bear that out at all. There's yet another
> worm for Windows today (Nyxem), but there just aren't any for Linux.
> Apparently the viral propagation barriers for Linux *are* higher than
> for Windows, since there just isn't any of any significance on the
> Linux side.

Actually, it's not a worm. It's a trojan with viral capabilities. A worm
moves from system to system without user interaction. Nyxem requires users
to execute a trojan, which then sends itself to other users.

It's nothing more than a social engineering attack.

Ray Ingles

unread,
Jan 24, 2006, 1:22:00 PM1/24/06
to
On 2006-01-24, Erik Funkenbusch <er...@despam-funkenbusch.com> wrote:
>> But the evidence does not bear that out at all. There's yet another
>> worm for Windows today (Nyxem), but there just aren't any for Linux.
>> Apparently the viral propagation barriers for Linux *are* higher than
>> for Windows, since there just isn't any of any significance on the
>> Linux side.
>
> Actually, it's not a worm. It's a trojan with viral capabiliites. A worm
> moves from system to system without user interaction. Nyxem requires users
> to execute a trojan, which then sends itself to other users.

It also tries to spread by remote shares, and creates a scheduled task
to run itself on the remote machine, so it's *also* a worm.

http://www.f-secure.com/v-descs/nyxem_e.shtml
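
(That scheduled-task trick is at least easy to audit for. A rough
Python sketch, assuming a Windows machine with the standard schtasks
tool; the "suspicious location" heuristic below is my own guess, not
anything from the F-Secure write-up:)

    # List all scheduled tasks in verbose CSV form and flag any whose
    # command runs from a UNC path or a temp directory.
    import csv
    import subprocess

    out = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout

    for row in csv.DictReader(out.splitlines()):
        cmd = row.get("Task To Run") or ""
        if cmd.startswith("\\\\") or "\\Temp\\" in cmd:
            print(row.get("TaskName"), "->", cmd)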

Half a million infections and growing... many people will be sad on Feb
3rd. But it won't be an apparently eternal threat like Nimda or Code Red,
which are *still* out there years later.

Which was JPB's point.

--
Sincerely,

Ray Ingles (313) 227-2317

Modern deductive method: 1) Devise hypothesis. 2) Apply
for grant. 3) Perform experiments. 4) Revise hypothesis.
5) Backdate revised hypothesis. 6) Publish.

The Ghost In The Machine

unread,
Jan 24, 2006, 3:00:04 PM1/24/06
to
In comp.os.linux.advocacy, billwg
<bi...@twcf.rr.com>
wrote
on Tue, 24 Jan 2006 17:50:28 GMT
<ENtBf.15057$Zj7....@tornado.tampabay.rr.com>:

It saps a tiny part of my bandwidth and increases the risk
for my personal setup. Presumably $EMPLOYER is getting
far more hits -- I'm in a relative backwater here at home,
a DSL account on Earthlink.

In any event, 445 is one of many ports under attack. I've
loaded the records into a database and am analyzing them on
an occasional basis.
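
(Something along these lines, for the record -- Python plus SQLite;
the log path and the regex are assumptions about a stock
syslog/iptables setup, not a copy of my actual scripts:)

    # Pull iptables LOG lines into SQLite, then count hits per port.
    import re
    import sqlite3

    LOG = "/var/log/messages"   # wherever syslog puts kernel messages
    pat = re.compile(r"SRC=(\S+) .* DPT=(\d+)")

    db = sqlite3.connect("hits.db")
    db.execute("CREATE TABLE IF NOT EXISTS hits (src TEXT, dport INTEGER)")

    with open(LOG) as f:
        for line in f:
            m = pat.search(line)
            if m:
                db.execute("INSERT INTO hits VALUES (?, ?)",
                           (m.group(1), int(m.group(2))))
    db.commit()

    # Which ports are getting hammered the most?
    for dport, n in db.execute(
            "SELECT dport, COUNT(*) FROM hits "
            "GROUP BY dport ORDER BY COUNT(*) DESC"):
        print(dport, n)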

Of course, c'est la vie... there are risks even when
handling ordinary fourth-class bulk-grade postage --
the USPS's term for "junk mail"; for starters, it might
be contaminated with something such as anthrax powder,
a thin but lethal oil of some sort that sticks to the
outside, a greeting card that contains a radio trigger
for a high-powered bomb hidden in a nearby car (and one
thought those cards that played tinny music were *cute!*),
or even radioactive paper.

(Used to be that one would occasionally get free samples
of detergent powder. I've not seen any lately, have you? :-) )

One could also send packets of peanuts (or coat the
letter with peanut oil) through the mail to a certain
small contingent of individuals who would get *very*
sick just breathing the scent therefrom.

(I wish I were kidding. Fortunately, I'm not allergic to peanuts.
Unfortunately, there are a few out there who are. One
could get even cuter and try to discern an oil that is fatal
to me, but not to you, or fatal to you but not me. The
CIA/NSA love this sort of thing, presumably.)

Of course those might lead to my death, but not to the
compromise of my personal data (not that I'd care too
much after I kicked the bucket anyway, but my heirs might).
That's an additional risk that one might receive from
contaminated emails, bad links that purport to be porn
(well, in a way, they are a rather nasty sort of porn --
they violate one's computer), or filtered viruses, which
are indications that the ISP is either on the job or at
least trying to protect its users from something that
looks dangerous to it; it's of little consequence to me,
actually.

And, occasionally, something comes along that is indeed pure junk. :-)

Welcome to the New World Order.

JPB

unread,
Jan 27, 2006, 5:04:13 PM1/27/06
to
Ray Ingles wrote:

> On 2006-01-21, JPB <news{@}europa{.}demon{.}co{.}uk> wrote:
>> However, I don't quite get this, because when I look at the behaviour of
>> actual virus/worm/trojan attacks against Windows, that *isn't* all that
>> successful and automated malicious code needs to do; getting to execute
>> [once] as a local user is only the start for malicious code attacking
>> Linux, with further obstacles to overcome in order to propagate
>> effectively.
>
> An insightful post, thanks! This is one of the reasons why I think
> malware will end up increasing operating system diversity in the future
> (google for my post a few months back, "The Keystone Predator").
>

One other thing occurs to me regarding this whole ease or difficulty of
propagation: if it's difficult to automate propagation, then it's difficult
to compromise thousands or tens of thousands of boxes and form a botnet.

And one thing botnets can do very well is provide an easy route for widely
distributing subsequent releases of new malware. If the zombied computer is
also still being used by its owner, it may be providing new targets as
well, such as email addresses to spam.

If propagation is reduced by better operating system design and security,
then it becomes more difficult to establish large botnets -- maybe even
impossible, if not enough boxes can be zombied and kept that way. That helps
protect everyone: if new malware has fewer ways to gain an initial
foothold, then it has less opportunity to start spreading in the
first place.
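
A back-of-the-envelope way to see the effect: treat each zombie as
probing some number of hosts per day, with some per-probe success
rate, against some daily cleanup rate. Every number in this little
Python sketch is invented purely for illustration:

    # Toy model, not a description of any real worm: daily growth is
    # (probes * success rate), daily shrinkage is the cleanup rate.
    def botnet_size(p, probes_per_day=100, cleanup_rate=0.05,
                    start=1000, days=60):
        b = float(start)
        for _ in range(days):
            b = b + b * probes_per_day * p - b * cleanup_rate
        return int(max(b, 0))

    # A small drop in per-probe success -- the "propagation barrier"
    # -- flips the outcome from steady growth to die-off.
    for p in (0.001, 0.0005, 0.0004):
        print(p, botnet_size(p))

With these made-up numbers, break-even is where probes times success
rate equals the cleanup rate; push the success rate below that and
the botnet shrinks no matter how big it starts.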

--
JPB
