
obfuscating python code for distribution


Littlefield, Tyler

May 15, 2011, 10:04:16 PM
to pytho...@python.org
Hello all:
I have been considering writing a couple of programs in Python, but I
don't want to distribute the code along with them. So I'm curious about
a couple of things.
First, does there exist a cross-platform library for playing audio
files, whose license I would not be violating if I do this?
Second, would I be violating the twisted, wxpython licenses by doing this?
Finally, is there a good way to accomplish this? I know that I can make
.pyc files, but those can be disassembled very easily, and shipping them
still means that the person needs the modules that are used. Is there
another way to go about this?
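The ease of disassembly is simple to demonstrate with the standard
library's dis module; here is a minimal sketch (secret_check is an
invented example, not code from this thread):

```python
import dis

def secret_check(password):
    # A "hidden" credential check; compiling this to a .pyc would
    # not hide anything from a reader of the bytecode.
    return password == "hunter2"

# The disassembly prints the bytecode, with the string literal
# 'hunter2' visible as a plain constant.
dis.dis(secret_check)
```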

--

Take care,
Ty
my website:
http://tds-solutions.net
my blog:
http://tds-solutions.net/blog
skype: st8amnd127
“Programmers are in a race with the Universe to create bigger and better idiot-proof programs, while the Universe is trying to create bigger and better
idiots. So far the Universe is winning.”
“If Java had true garbage collection, most programs would delete themselves upon execution.”

Daniel Kluev

May 15, 2011, 10:21:00 PM
to pytho...@python.org
On Mon, May 16, 2011 at 1:04 PM, Littlefield, Tyler <ty...@tysdomain.com> wrote:
> Hello all:

> Finally, is there a good way to accomplish this? I know that I can make .pyc
> files, but those can be disassembled very very easily with the disassembler
> and shipping these still means that the person needs the modules that are
> used. Is there another way to go about this?

No, there is no way to prevent users from getting access to raw python
sources. By its nature and design, python is not meant to be used this
way, and even obfuscation would not harm readability much.
However, you can write all parts you want to hide in C/C++/Cython and
distribute them as .so/.dll
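As a rough sketch of that route (assuming Cython is installed; the
module name hot_loop.pyx is invented for illustration), the build
configuration looks something like this:

```python
# setup.py -- build a Cython module into a native extension
# (.so on Linux/macOS, .pyd on Windows). Run with:
#   python setup.py build_ext --inplace
from distutils.core import setup
from Cython.Build import cythonize

setup(
    name="hot_loop",
    ext_modules=cythonize("hot_loop.pyx"),
)
```

Note the resulting binary can still be disassembled by a determined
reader, just at a lower level than Python bytecode.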

--
With best regards,
Daniel Kluev

James Mills

May 15, 2011, 10:36:56 PM
to python list
On Mon, May 16, 2011 at 12:21 PM, Daniel Kluev <dan....@gmail.com> wrote:
> No, there is no way to prevent users from getting access to raw python
> sources. By its nature and design, python is not meant to be used this
> way, and even obfuscation would not harm readability much.
> However, you can write all parts you want to hide in C/C++/Cython and
> distribute them as .so/.dll

Or you could do what everyone else is doing
and provide your "application" as a service in some manner.

cheers
James

--
-- James Mills
--
-- "Problems are solved by method"

Ben Finney

May 15, 2011, 11:29:49 PM
"Littlefield, Tyler" <ty...@tysdomain.com> writes:

> I have been considering writing a couple of programs in Python, but I
> don't want to distribute the code along with them.

This topic has been raised many times before, and there is a response
which is now common but may sound harsh:

What is it you think you would gain by obfuscating the code, and why is
that worthwhile? What evidence do you have that code obfuscation would
achieve that?

> Finally, is there a good way to accomplish this? I know that I can
> make .pyc files, but those can be disassembled very very easily with
> the disassembler and shipping these still means that the person needs
> the modules that are used. Is there another way to go about this?

Not really, no. You would be best served by critically examining the
requirement to obfuscate the code at all.

--
\ “Leave nothing to chance. Overlook nothing. Combine |
`\ contradictory observations. Allow yourself enough time.” |
_o__) —Hippocrates |
Ben Finney

Littlefield, Tyler

May 15, 2011, 11:36:53 PM
to pytho...@python.org
I'm putting lots of work into this. I would rather not have some script
kiddy dig through it, yank out chunks and do whatever he wants. I just
want to distribute the program as-is, not distribute it and leave it
open to being hacked.

On 5/15/2011 9:29 PM, Ben Finney wrote:
> "Littlefield, Tyler"<ty...@tysdomain.com> writes:
>
>> I have been considering writing a couple of programs in Python, but I
>> don't want to distribute the code along with them.
> This topic has been raised many times before, and there is a response
> which is now common but may sound harsh:
>
> What is it you think you would gain by obfuscating the code, and why is
> that worthwhile? What evidence do you have that code obfuscation would
> achieve that?
>
>> Finally, is there a good way to accomplish this? I know that I can
>> make .pyc files, but those can be disassembled very very easily with
>> the disassembler and shipping these still means that the person needs
>> the modules that are used. Is there another way to go about this?
> Not really, no. You would be best served by critically examining the
> requirement to obfuscate the code at all.
>


--

Take care,

harrismh777

May 15, 2011, 11:48:03 PM
Littlefield, Tyler wrote:
> I'm putting lots of work into this. I would rather not have some script
> kiddy dig through it, yank out chunks and do whatever he wants. I just
> want to distribute the program as-is, not distribute it and leave it
> open to being hacked.

Protection via obfuscation is invalid practically as well as
philosophically. Those of us who work in the free software movement (or
the open software movement too) specifically believe that obfuscation is
an incorrect approach.

Obfuscation is the paramount Microsoft strategy for protection and for
security. It doesn't work. In fact, opening the code makes what many of
us who value open source consider 'good science' more secure, by
allowing peer review and community improvement.

Some of us believe that code is not useful unless it's open. If I can't
see what you're doing, comment on it, improve it if I like, and share it
with others, I don't need it (it's really that simple).

Nobody can make this decision for you, of course, but please consider
making your code free software (GPL licensed), or at least open and
GPL-compatible licensed.

kind regards,
m harris


Steven D'Aprano

May 16, 2011, 12:03:09 AM
On Sun, 15 May 2011 21:36:53 -0600, Littlefield, Tyler wrote:

> I'm putting lots of work into this. I would rather not have some script
> kiddy dig through it, yank out chunks and do whatever he wants.


The best way to do that is to labour in obscurity, where nobody either
knows or cares about your application. There are hundreds of thousands,
possibly millions, of such applications, with a user base of one: the
creator.

One other alternative is to ask yourself, what's the marginal value of
yanking out chunks from my code? What harm does it do me if Joe Haxor
spends hours pulling out one subroutine, or a dozen, from my app, and
using them in his app? Why should I care?

It never ceases to amaze me how often people write some trivial
application, like a thousand others, or even some trivial function or
class, and then treat it like the copyright to Mickey Mouse. I don't know
what your application is, or how it works. It's conceivable that it's the
next Microsoft Office. But my advice to you is to take a pragmatic,
realistic view of the cost of copyright infringement.

If it's going to cost you $1000 in extra effort to prevent $100 of harm,
it's simply not worth it.

> I just
> want to distribute the program as-is, not distribute it and leave it
> open to being hacked.

Right... because of course we all know how Windows being distributed
without source code makes it soooooo secure.

You are conflating two different issues:

* Can people "steal" or copy my ideas and code?

* Can people hack my code (in the bad sense)?


I hope this does not offend, because I mean it in the nicest possible
way, but if you think that not distributing source code will prevent your
code from being broken, then you are delusional.

Look at Facebook and its periodic security holes and accounts being
hacked. Not only don't Facebook distribute source code, but they don't
distribute *anything* -- their application is on their servers, behind a
firewall. Does it stop hackers? Not a chance.


--
Steven

Ben Finney

May 16, 2011, 12:10:11 AM
"Littlefield, Tyler" <ty...@tysdomain.com> writes:

> I'm putting lots of work into this. I would rather not have some
> script kiddy dig through it, yank out chunks and do whatever he wants.
> I just want to distribute the program as-is, not distribute it and
> leave it open to being hacked.

How do these arguments apply to your code base when they don't apply to,
say, LibreOffice or Linux or Python or Apache or Firefox?

How is your code base going to be harmed by having the source code
available to recipients, when that demonstrably doesn't harm countless
other code bases out there?

--
\ “Let others praise ancient times; I am glad I was born in |
`\ these.” —Ovid (43 BCE–18 CE) |
_o__) |
Ben Finney

Chris Angelico

May 16, 2011, 12:40:17 AM
to pytho...@python.org
On Mon, May 16, 2011 at 2:03 PM, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
> The best way to do that is to labour in obscurity, where nobody either
> knows or cares about your application. There are hundreds of thousands,
> possibly millions, of such applications, with a user base of one: the
> creator.

And I'm sure Steven will agree with me that this is not in any way a
bad thing. I've written hundreds of such programs myself (possibly
thousands), and they have all served their purposes. On a slightly
larger scale, there are even more programs that have never left the
walls of my house, having been written for my own family - not because
I'm afraid someone else will steal them, but because they simply are
of no value to anyone else. But hey, if anyone wants a copy of my code
that's basically glue between [obscure application #1] and [obscure
application #2] that does [obscure translation] as well to save a
human from having to do it afterwards, sure! You're welcome to it! :)

However, I do not GPL my code; I prefer some of the other licenses
(such as CC-BY-SA), unless I'm working on a huge project that's not
meant to have separate authors. For something that by and large is one
person's work, I think it's appropriate to give attribution. But
discussion of exactly _which_ open source license to use is a can of
worms that's unlikely to be worth opening at this stage.

Chris Angelico

Littlefield, Tyler

May 16, 2011, 1:41:23 AM
to pytho...@python.org
Hello:
Thanks all for your information and ideas. I like the idea of open
source; I have a fairly large (or large, by my standards anyway) project
that I am working on that is open source.

Here's kind of what I want to prevent. I want to write a multi-player
online game; everyone will essentially end up connecting to my server to
play the game. I don't really like the idea of security through
obscurity, but I wanted to prevent a couple of problems.
1) First I want to prevent people from hacking at the code, then using
my server as a test for their new setups. I do not want someone to gain
some extra advantage just by editing the code.
Is there some other solution to this, short of closed-source?
Thanks,

James Mills

May 16, 2011, 2:00:12 AM
to python list
On Mon, May 16, 2011 at 3:41 PM, Littlefield, Tyler <ty...@tysdomain.com> wrote:
> Here's kind of what I want to prevent. I want to write a multi-player online
> game; everyone will essentially end up connecting to my server to play the
> game. I don't really like the idea of security through obscurity, but I
> wanted to prevent a couple of problems.
> 1) First I want to prevent people from hacking at the code, then using my
> server as a test for their new setups. I do not want someone to gain some
> extra advantage just by editing the code.
> Is there some other solution to this, short of closed-source?

As I mentioned before (which I don't think you quite got)...

Write your "game" for the "web".
Write it as a SaaS (Software as a Service) - even if it's free and open source.

Chris Angelico

May 16, 2011, 2:12:36 AM
to pytho...@python.org
On Mon, May 16, 2011 at 3:41 PM, Littlefield, Tyler <ty...@tysdomain.com> wrote:
> Here's kind of what I want to prevent. I want to write a multi-player online
> game; everyone will essentially end up connecting to my server to play the
> game. I don't really like the idea of security through obscurity, but I
> wanted to prevent a couple of problems.
> 1) First I want to prevent people from hacking at the code, then using my
> server as a test for their new setups. I do not want someone to gain some
> extra advantage just by editing the code.
> Is there some other solution to this, short of closed-source?

1) If you're worried about people getting hold of the code that's
running on your server, that's a server security issue and not a
Python obscurity issue (if they get the code, they can run it no
matter how obscured it is).

2) Was there a problem 2? :)

As James Mills said, just leave it on the server and then you don't
have to give out the source (and by "don't have to", I mean ethically,
legally, and technically).

You may want to give some thought to scaleability of your code; Google
told their staff to avoid Python for things that are going to get
hammered a lot (although it's possible that Google's idea of "a lot"
is five orders of magnitude more than you'll ever get!!). But if your
game world puts a hard limit on its own load (eg if players are on a
50x50 board and you know you can handle 2500 simultaneous players),
you won't have a problem.

Also, Python doesn't really cater to servers that want to have their
code updated on the fly; I'm sure you could work something out using a
dictionary of function objects, but otherwise you're stuck with
bringing the server down to do updates. That's considered normal in
today's world, but I really don't know why... downtime is SO last
century!

Chris Angelico
happily running servers on fully open source stacks

Littlefield, Tyler

May 16, 2011, 2:17:15 AM
to pytho...@python.org
>Write your "game" for the "web".
>Write it as a SaaS (Software as a Service) - even if it's free and
>open source.
I understood you loud and clear. And that makes a lot of assumptions
about my game and its design. I don't really care to host this over the
web. I want a centralized server that performs the logic, where I can
offload the playing of sounds (through a soundpack that's already
installed) to the client side.
Not only that, but a lot of the web technologies that would be used for
this wouldn't really work, as I am doing this for the blind; Flash, as
well as a lot of the popular setups, is not very accessible.

Littlefield, Tyler

May 16, 2011, 2:20:07 AM
to pytho...@python.org
Hello:
I wanted to make the client in python, and the server possibly, though
I'm not really sure on that. I was not worried about the code for the
server being stolen, as much as I was worried about people tinkering
with the client code for added advantages. Most of the logic can be
handled by the server to prevent a lot of this, but there are still ways
people could give themselves advantages by altering the client.

James Mills

May 16, 2011, 2:24:12 AM
to python list
On Mon, May 16, 2011 at 4:16 PM, Littlefield, Tyler <ty...@tysdomain.com> wrote:
> I understood you loud and clear. And that makes a lot of assumptions on my
> game and the design. I don't really care to host this over the web. I want a
> centralized server that would perform the logic, where I can offload the
> playing of sounds (through a soundpack that's already installed) to the
> client-side. Not only that, but a lot of web technologies that would be used
> for this wouldn't really work, as I am doing this for the blind; Flash as
> well as a lot of the popular setups are not very accessible.

Funny you should mention this "now" :)
I happen to be blind myself.

Yes I agree Flash is not very accessible (never has been).

Web Standards web apps and such however are quite
accessible!

geremy condra

May 16, 2011, 3:27:44 AM
to ty...@tysdomain.com, pytho...@python.org
On Sun, May 15, 2011 at 10:41 PM, Littlefield, Tyler
<ty...@tysdomain.com> wrote:
> Hello:
> Thanks all for your information and ideas. I like the idea of open source; I
> have a fairly large (or large, by my standards anyway) project that I am
> working on that is open source.
>
> Here's kind of what I want to prevent. I want to write a multi-player online
> game; everyone will essentially end up connecting to my server to play the
> game. I don't really like the idea of security through obscurity, but I
> wanted to prevent a couple of problems.
> 1) First I want to prevent people from hacking at the code, then using my
> server as a test for their new setups. I do not want someone to gain some
> extra advantage just by editing the code.
> Is there some other solution to this, short of closed-source?
> Thanks,

I don't know that closing the source does you much more good than
obfuscating it. The obvious attack surface here is pretty much totally
exposed via network traffic, which any legitimate client can gain
access to. A better approach would be to simply write more secure code
in the first place.

Geremy Condra

Steven D'Aprano

May 16, 2011, 4:49:12 AM
On Sun, 15 May 2011 23:41:23 -0600, Littlefield, Tyler wrote:

> Here's kind of what I want to prevent. I want to write a multi-player
> online game; everyone will essentially end up connecting to my server to
> play the game. I don't really like the idea of security through
> obscurity, but I wanted to prevent a couple of problems. 1) First I want
> to prevent people from hacking at the code, then using my server as a
> test for their new setups. I do not want someone to gain some extra
> advantage just by editing the code. Is there some other solution to
> this, short of closed-source? Thanks,

Closed source is not a solution. Please wipe that out of your mind.
People successfully hack closed source applications. The lack of source
is hardly a barrier at all: it's like painting over the door to your
house in camouflage colours so from a distance people won't see it. To a
guy with a network sniffer and debugger, the lack of source is no barrier
at all.

You're trying to solve a hard problem, and by hard, I mean "impossible".
It simply isn't possible to trust software on a machine you don't
control, and pretty damn hard on a machine you do control. To put it in a
nutshell, you can't trust *anything*. See the classic paper by Ken
Thompson, "Reflections on Trusting Trust":

http://cm.bell-labs.com/who/ken/trust.html

Now, in a more practical sense, you might not fear that the operating
system will turn on you, or the Python compiler. Some threats you don't
care about. The threat model you do care about is a much more straight-
forward one: how to trust the desktop client of your game?

Alas, the answer is, you can't. You can't trust anything that comes from
the client until you've verified it is unmodified, and you can't verify
it is unmodified until you can trust the information it sends you. A
vicious circle. You're fighting physics here. Don't think that obscuring
the source code will help.

On-line game servers are engaged in a never-ending arms race against
"punks" who hack the clients. The servers find a way to detect one hack
and block it, and the punks find another hack that goes unnoticed for a
while. It's like anti-virus and virus, or immune systems and germs.

The question you should be asking is not "how do I make this secure
against cheats?", but "how much cheating can I afford to ignore?".

If your answer is "No cheating is acceptable", then you have to do all
the computation on the server, nothing on the client, and to hell with
performance. All your client does is the user interface part.
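To make the "all computation on the server" idea concrete, here is a
minimal sketch of a server-authoritative move handler (the board size,
the handle_move name, and the message shape are all invented for
illustration, not from any real server):

```python
BOARD = 50  # hypothetical 50x50 world

def handle_move(state, player_id, direction):
    """Apply a move request; the client sends only an intent."""
    dx, dy = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}[direction]
    x, y = state[player_id]
    nx, ny = x + dx, y + dy
    # Reject anything outside the rules, no matter what the client claims.
    if not (0 <= nx < BOARD and 0 <= ny < BOARD):
        return state[player_id]          # illegal move: position unchanged
    state[player_id] = (nx, ny)
    return state[player_id]

state = {"alice": (10, 10)}
handle_move(state, "alice", "N")         # -> (10, 9)
```

The client never tells the server where it is, only what it wants to
do; the server recomputes everything itself.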

If the answer is, "It's a MUD, who's going to cheat???" then you don't
have to do anything. Trust your users. If the benefit from "cheating" is
small enough, and the number of cheaters low, who cares? You're not
running an on-line casino for real money.

See also here:

http://web.archiveorange.com/archive/v/bqumydkHsi2ytdsX7ewa


Another approach might be to use psychology on your users. Run one server
for vanilla clients to connect to, and another server where anything
goes. Let the punks get it out of their system by competing with other
punks. Run competitions to see who can beat the most souped up, dirty,
cheating turbo-powered clients, for honour and glory. Name and shame the
punks who cheat on the vanilla server, praise the best cheaters on the
anything-goes machine, and you'll (hopefully!) find that the level of
cheating on the vanilla server is quite low. Who wants to be the low-life
loser who wins by cheating when you can challenge your hacker peers
instead?

(Note: I don't know if this approach ever works, but I know it does *not*
work when real money or glory is involved. Not even close.)

If Blizzard can't stop private servers, rogue clients and hacked
accounts, what makes you think you can?


--
Steven

Chris Angelico

May 16, 2011, 5:10:20 AM
to pytho...@python.org
On Mon, May 16, 2011 at 6:49 PM, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
> If your answer is "No cheating is acceptable", then you have to do all
> the computation on the server, nothing on the client, and to hell with
> performance. All your client does is the user interface part.
>
> If the answer is, "It's a MUD, who's going to cheat???" then you don't
> have to do anything. Trust your users. If the benefit from "cheating" is
> small enough, and the number of cheaters low, who cares? You're not
> running an on-line casino for real money.

The nearest I've seen to the latter is Dungeons and Dragons. People
can cheat in a variety of ways, but since they're not playing
*against* each other, cheating is rare. As to the former, though...
the amount of computation that you can reliably offload to even a
trusted client is low, so you don't lose much by doing it all on the
server. The most computationally-intensive client-side work would be
display graphics and such, and that's offloadable if and ONLY if
there's no game-sensitive information hidden behind things. Otherwise
someone could snoop the traffic-stream and find out what's behind that
big nasty obstacle, or turn the obstacle transparent, or whatever...
not safe.

There's an old OS/2 game called Stellar Frontier that moves sprites
around on the screen using clientside code, but if there's a bit of
lag talking to the server, you see a ship suddenly yoinked to its new
position when the client gets the latest location data. That's a fair
compromise, I think; the client predicts where the ship "ought to be",
and the server corrects it when it can.
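That predict-then-correct behaviour can be sketched in a few lines (the
function names and the snap threshold are invented for illustration):

```python
def predict(pos, vel, dt):
    """Client guesses where the ship ought to be between server updates."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def reconcile(predicted, authoritative, snap_threshold=5.0):
    """When the server's position arrives, snap only if the error is large."""
    err = ((predicted[0] - authoritative[0]) ** 2 +
           (predicted[1] - authoritative[1]) ** 2) ** 0.5
    return authoritative if err > snap_threshold else predicted
```

Small errors are tolerated for smooth display; big ones yield exactly
the "yoinked to its new position" correction described above.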

Chris Angelico

Jean-Michel Pichavant

May 16, 2011, 5:36:02 AM
to ty...@tysdomain.com, pytho...@python.org
Littlefield, Tyler wrote:
> Hello:
> Thanks all for your information and ideas. I like the idea of open
> source; I have a fairly large (or large, by my standards anyway)
> project that I am working on that is open source.
>
> Here's kind of what I want to prevent. I want to write a multi-player
> online game; everyone will essentially end up connecting to my server
> to play the game. I don't really like the idea of security through
> obscurity, but I wanted to prevent a couple of problems.
> 1) First I want to prevent people from hacking at the code, then using
> my server as a test for their new setups. I do not want someone to
> gain some extra advantage just by editing the code.
> Is there some other solution to this, short of closed-source?
> Thanks,
>
If your app meets with some success, you'll need some help. You'll be
able to get some only if the community grows and has access to your
code. If you want to battle hackers, you have already lost (if your app
has no success, there will be no hackers anyway :o) )
Otherwise, I guess that most online games execute all decisions and
state machine transitions on the server side, which is the only code you
can trust. The client only forwards user inputs to the server and
displays the resulting effect.

JM

Nobody

May 16, 2011, 8:05:07 AM
On Sun, 15 May 2011 23:41:23 -0600, Littlefield, Tyler wrote:

> Here's kind of what I want to prevent. I want to write a multi-player
> online game; everyone will essentially end up connecting to my server to
> play the game. I don't really like the idea of security through
> obscurity, but I wanted to prevent a couple of problems.
> 1) First I want to prevent people from hacking at the code, then using
> my server as a test for their new setups. I do not want someone to gain
> some extra advantage just by editing the code.
> Is there some other solution to this, short of closed-source?

Closed source will not help in the slightest.

What will help is to remember the fundamental rule of client-server
security: Don't Trust The Client. If you don't remember this rule, you
have no security whatsoever, whether the source is open or closed.

Obfuscating the source won't prevent someone from running it under a
modified Python interpreter, or running an unmodified Python interpreter
under a debugger, or with modified DLLs (or even device drivers).

To give just one example, Blizzard has a whole team of people working on
anti-cheating measures, most of which involve installing various pieces of
privacy-invading, security-endangering malware on their customers'
systems. And it still doesn't work.

Grant Edwards

May 16, 2011, 9:52:46 AM
On 2011-05-16, Ben Finney <ben+p...@benfinney.id.au> wrote:
> "Littlefield, Tyler" <ty...@tysdomain.com> writes:
>
>> I'm putting lots of work into this. I would rather not have some
>> script kiddy dig through it, yank out chunks and do whatever he wants.
>> I just want to distribute the program as-is, not distribute it and
>> leave it open to being hacked.
>
> How do these arguments apply to your code base when they don't apply to,
> say, LibreOffice or Linux or Python or Apache or Firefox?

One obvious way that those arguments don't apply is that the OP didn't
put lots of work into LibreOffice, Linux, Python, Apache or Firefox
and therefore doesn't have any right to control their distribution.

> How is your code base going to be harmed by having the source code
> available to recipients, when that demonstrably doesn't harm
> countless other code bases out there?

The owner of something is free to determine how it is distributed --
he doesn't have any obligation to prove to you that some particular
method of distribution is harmful to him or anybody else.

--
Grant Edwards               grant.b.edwards at gmail.com
Yow! BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-BI-

Littlefield, Tyler

May 16, 2011, 10:44:33 AM
to pytho...@python.org
>Funny you should mention this "now"
I don't go around parading the info, until I have to.

>Yes I agree Flash is not very accessible (never has been).
>Web Standards web apps and such however are quite
>accessible!
If I was making a browser-based game, yes. As I'm not though...

Anyway, thanks to everyone else who answered this thread. I've not done
much like this besides muds, and all the logic is on the server there. I
think I will build the client in Python, open source it for people to
fix/add to if they want, and make sure to keep the server as secure as
it can be.

harrismh777

May 16, 2011, 3:40:39 PM
Steven D'Aprano wrote:
> To put it in a
> nutshell, you can't trust *anything*. See the classic paper by Ken
> Thompson, "Reflections on Trusting Trust":
>

This is true, but there's another, more pro-active way to put it:

... expect the client to be untrustworthy.

In other words, write the server code with a protocol that 'expects' the
client to be hacked. Yes, it takes three times the code and at least
five times the work, but it's worth it.

What do you do with syn floods?

What do you do with attempted overruns?

What if someone builds a client emulator, just to hammer your protocol
and slow the server down, just for fun...?

You must build your server side 'assuming' that *all* of these things
are going to happen (and more), and then be able to handle them when
they do. That is what makes server-side coding so difficult.

In other words, you build the server in such a way that you can
confidently hand Mr junior cracker your client source code and be
confident that your gaming server is going to be a.o.k.

Many, many coders don't want to go to all this trouble (and don't)...
mainly because they're just glad if they can get simple sockets to work.
So they don't handle attempted overruns, or syn flood open attempts, or
other attacks.

One thing to remember (think about this) is whether your server/client
is in a push or a pull mode. *Never* allow the client to be in control
(pushing) while your server passively pulls. The server must control
everything so that the untrusted client will be *controlled* regardless
of client-side hacks.
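One concrete piece of that defensive posture is refusing malformed or
oversized input at the protocol boundary. A minimal, hedged sketch (the
message schema here is invented, not from the thread):

```python
import json

# Hypothetical whitelist of message types and the fields each may carry.
ALLOWED = {"move": {"direction"}, "say": {"text"}}

def parse_message(raw, max_len=512):
    """Validate one client message strictly; reject anything unexpected."""
    if len(raw) > max_len:
        raise ValueError("message too long")   # cheap flood/overrun guard
    msg = json.loads(raw)                      # raises on malformed input
    kind = msg.get("type")
    if kind not in ALLOWED:
        raise ValueError("unknown message type")
    extra = set(msg) - ALLOWED[kind] - {"type"}
    if extra:
        raise ValueError("unexpected fields: %s" % extra)
    return kind, msg
```

Everything the client sends passes through one choke point like this, so
a hacked client can only ever say things the protocol already allows.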

I realize that this probably means redesign of your server. Do it.

Happy gaming!

m harris


Rhodri James

May 16, 2011, 6:42:40 PM
On Mon, 16 May 2011 03:21:00 +0100, Daniel Kluev <dan....@gmail.com>
wrote:

...which is, of course, not exactly secure either. A sufficiently
determined hacker won't have much trouble disassembling a shared library
even if you do strip out all the debug information. By chance I'm having
to do something closely related to this at work just at the moment; it's
hard, but far from impossible.

--
Rhodri James *-* Wildebeest Herder to the Masses

Ben Finney

May 16, 2011, 8:22:48 PM
"Littlefield, Tyler" <ty...@tysdomain.com> writes:

> I wanted to make the client in python, and the server possibly, though
> I'm not really sure on that. I was not worried about the code for the
> server being stolen, as much as I was worried about people tinkering
> with the client code for added advantages.

Thank you for making your constraints explicit; that's more than most
people do when asked.

As Steven said, you're trying to solve a problem which is very
difficult, and obfuscating the code won't be of much help. If people
have the program running on their own computers, they can hack it. You
can't stop that, so you have to consider other ways of making it
ineffective.

--
\ “The fact that a believer is happier than a skeptic is no more |
`\ to the point than the fact that a drunken man is happier than a |
_o__) sober one.” —George Bernard Shaw |
Ben Finney

Ben Finney

May 16, 2011, 8:27:48 PM
Grant Edwards <inv...@invalid.invalid> writes:

> On 2011-05-16, Ben Finney <ben+p...@benfinney.id.au> wrote:
> > "Littlefield, Tyler" <ty...@tysdomain.com> writes:
> >
> >> I'm putting lots of work into this. I would rather not have some
> >> script kiddy dig through it, yank out chunks and do whatever he
> >> wants. I just want to distribute the program as-is, not distribute
> >> it and leave it open to being hacked.
> >
> > How do these arguments apply to your code base when they don't apply
> > to, say, LibreOffice or Linux or Python or Apache or Firefox?
>
> One obvious way that those arguments don't apply is that the OP didn't
> put lots of work into LibreOffice, Linux, Python, Apache or Firefox

Yet the copyright holders *did* put lots of effort into those works
respectively. So the arguments would apply equally well; which is to
say, they don't.

> > How is your code base going to be harmed by having the source code
> > available to recipients, when that demonstrably doesn't harm
> > countless other code bases out there?
>
> The owner of something is free to determine how it is distributed --
> he doesn't have any obligation to prove to you that some particular
> method of distribution is harmful to him or anybody else.

Note that I didn't say anything about obligation or harm to persons. I
asked only about the code base and the distribution thereof.

In the meantime, Tyler has come back to us with arguments that *do*
differentiate between the above cases and his own. So thanks, Tyler, for
answering the questions.

--
\ “Of course, everybody says they're for peace. Hitler was for |
`\ peace. Everybody is for peace. The question is: what kind of |
_o__) peace?” —Noam Chomsky, 1984-05-14 |
Ben Finney

Ben Finney

May 16, 2011, 8:30:22 PM
to
"Littlefield, Tyler" <ty...@tysdomain.com> writes:

> Anyway, thanks to everyone else who answered this thread. I've not
> done much like this besides muds, and all the logic is on the server
> there, I think I will build the client in python, open source it for
> people to fix/add to if they want and make sure to keep the server as
> secure as it can be.

Sounds like a good approach to me that doesn't treat users as
necessarily hostile.

I wish you good fortune in building a strong community around the game
so that it can defend itself from cheaters, and a free-software client
will IMO promote exactly that.

--
\ “I do not believe in immortality of the individual, and I |
`\ consider ethics to be an exclusively human concern with no |
_o__) superhuman authority behind it.” —Albert Einstein, letter, 1953 |
Ben Finney

alex23

May 16, 2011, 11:45:55 PM
to
"Littlefield, Tyler" <ty...@tysdomain.com> wrote:
> Anyway, thanks to everyone else who answered this thread. I've not done
> much like this besides muds, and all the logic is on the server there, I
> think I will build the client in python, open source it for people to
> fix/add to if they want and make sure to keep the server as secure as it
> can be.

The browser-based game Lacuna Expanse actually open sources the Perl
client for their game, it might be a good place for ideas on how to
approach this: https://github.com/plainblack/Lacuna-Web-Client

The MMO EVE uses Stackless Python for both the client & server. Here's
a slightly older doc detailing their architecture:
http://www.slideshare.net/Arbow/stackless-python-in-eve

Hope this helps.

Dotan Cohen

May 17, 2011, 2:16:35 AM
to Chris Angelico, pytho...@python.org
On Mon, May 16, 2011 at 07:40, Chris Angelico <ros...@gmail.com> wrote:
> And I'm sure Steven will agree with me that this is not in any way a
> bad thing. I've written hundreds of such programs myself (possibly
> thousands), and they have all served their purposes. On a slightly
> larger scale, there are even more programs that have never left the
> walls of my house, having been written for my own family - not because
> I'm afraid someone else will steal them, but because they simply are
> of no value to anyone else. But hey, if anyone wants a copy of my code
> that's basically glue between [obscure application #1] and [obscure
> application #2] that does [obscure translation] as well to save a
> human from having to do it afterwards, sure! You're welcome to it! :)
>
> However, I do not GPL my code; I prefer some of the other licenses
> (such as CC-BY-SA), unless I'm working on a huge project that's not
> meant to have separate authors. For something that by and large is one
> person's work, I think it's appropriate to give attribution. But
> discussion of exactly _which_ open source license to use is a can of
> worms that's unlikely to be worth opening at this stage.
>

Actually, Chris, those applications are probably no less valuable to
be open source than Linux or Firefox. The reason is that when one goes
to learn a new language it is valuable to look at existing real world
code. However, the code available online generally falls into one of
two categories:
1) Simple sample code, which demonstrates a principle or technique
2) Full-blown FOSS application with hundreds of source files and a build system

It sounds to me like your home-brew code might be one of the missing
links between the two. It won't be so tiny as to be trivial, but it
won't be so huge as to be beyond the grasp of novices.

I for one would love to look over such code. I'll learn something,
without a doubt. Maybe someone might even spot a bug or make a
suggestion to improve it. And almost invariably, any problem that I've
ever had someone has had first. So while you might have been one of
the first to have a need to interface FooWidget with PlasmoidBar, someone
after you will in fact need just the code to do that.

--
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com

Chris Angelico

May 17, 2011, 2:39:48 AM
to pytho...@python.org
On Tue, May 17, 2011 at 4:16 PM, Dotan Cohen <dotan...@gmail.com> wrote:
> Actually, Chris, those applications are probably no less valuable to
> be open source than Linux or Firefox. The reason is that when one goes
> to learn a new language it is valuable to look at existing real world
> code. However, the code available online generally falls into one of
> two categories:
> 1) Simple sample code, which demonstrates a principle or technique
> 2) Full-blown FOSS application with hundreds of source files and a build system
>
> It sounds to me like your home-brew code might be one of the missing
> links between the two. It won't be so tiny as to be trivial, but it
> won't be so huge as to be beyond the grasp of novices.

You have a point there. Although I can't guarantee that all my code is
particularly *good*, certainly not what I'd want to hold up for a
novice to learn from - partly because it dates back anywhere up to two
decades, and partly because quite a few of the things I was working
with are completely undocumented!

But if you have Pastel Accounting Version 5, running in a Windows 3.1
virtual session, and you want to export some of its data to a DB2
database, I can help you quite a bit. Assuming you have an OS/2 system
to run it on, of course. (You see what I mean about obscure?) I should
probably dust off some of the slightly-more-useful pieces and put them
up on either The Esstu Pack (my old web site) or rosuav.com (my new
web site, doesn't have any better name than that), but that kinda
requires time, a resource that I don't have an awful lot of. I'm sure
there'll be a few oddments in there where at least one half of the
glue is more useful. Back then, though, I didn't know Python, nor
Pike, nor any of quite a few other awesome languages, but REXX and C++
are at least available open source.

Chris Angelico

D'Arcy J.M. Cain

May 17, 2011, 9:36:23 AM
to Chris Angelico, pytho...@python.org
On Tue, 17 May 2011 16:39:48 +1000
Chris Angelico <ros...@gmail.com> wrote:
> You have a point there. Although I can't guarantee that all my code is
> particularly *good*, certainly not what I'd want to hold up for a
> novice to learn from - partly because it dates back anywhere up to two
> decades, and partly because quite a few of the things I was working
> with are completely undocumented!

Sounds like a perfect reason to open source it. If what you say is
true it could benefit you more than others, at least at the beginning.
Remember, open source is a two way street.

--
D'Arcy J.M. Cain <da...@druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.

Hans Georg Schaathun

May 18, 2011, 3:36:37 AM
to
On Mon, 16 May 2011 23:42:40 +0100, Rhodri James
<rho...@wildebst.demon.co.uk> wrote:
: ...which is, of course, not exactly secure either. A sufficiently
: determined hacker won't have much trouble disassembling a shared library
: even if you do strip out all the debug information. By chance I'm having
: to do something closely related to this at work just at the moment; it's
: hard, but far from impossible.

But then, nothing is secure in any absolute sense. The best you can
do with all your security efforts is to manage risk. Since obfuscation
increases the cost of mounting an attack, it also reduces risk,
and thereby provides some level of security.

Obviously, if your threat sources are dedicated hackers or maybe MI5,
there is no point bothering with obfuscation, but if your threat source
is script kiddies, then it might be quite effective.

--
:-- Hans Georg

Dotan Cohen

May 18, 2011, 10:42:36 AM
to Hans Georg Schaathun, pytho...@python.org
On Wed, May 18, 2011 at 10:36, Hans Georg Schaathun <h...@schaathun.net> wrote:
> But then, nothing is secure in any absolute sense.  The best you can
> do with all your security efforts is to manage risk.  Since obfuscation
> increases the cost of mounting an attack, it also reduces risk,
> and thereby provides some level of security.
>
> Obviously, if your threat sources are dedicated hackers or maybe MI5,
> there is no point bothering with obfuscation, but if your threat source
> is script kiddies, then it might be quite effective.
>

The flip side is that the developer will not know about weaknesses
until much later in the development, when making changes to the
underlying code organization may be difficult or impossible. In this
early phase of development, he should actually encourage the script
kiddies to "report the bugs".

geremy condra

May 18, 2011, 12:54:30 PM
to Hans Georg Schaathun, pytho...@python.org
On Wed, May 18, 2011 at 12:36 AM, Hans Georg Schaathun <h...@schaathun.net> wrote:
> On Mon, 16 May 2011 23:42:40 +0100, Rhodri James
>  <rho...@wildebst.demon.co.uk> wrote:
> :  ...which is, of course, not exactly secure either.  A sufficiently
> :  determined hacker won't have much trouble disassembling a shared library
> :  even if you do strip out all the debug information.  By chance I'm having
> :  to do something closely related to this at work just at the moment; it's
> :  hard, but far from impossible.
>
> But then, nothing is secure in any absolute sense.

If you're talking security and not philosophy, there is such a thing
as a secure system. As a developer you should aim for it.

> The best you can
> do with all your security efforts is to manage risk.  Since obfuscation
> increases the cost of mounting an attack, it also reduces risk,
> and thereby provides some level of security.

The on-the-ground reality is that it doesn't. Lack of access to the
source code has not kept windows or adobe acrobat or flash player
secure, and they have large full-time security teams, and as you might
imagine from the amount of malware floating around targeting those
systems there are a lot of people who have these skills in spades.

> Obviously, if your threat sources are dedicated hackers or maybe MI5,
> there is no point bothering with obfuscation, but if your threat source
> is script kiddies, then it might be quite effective.

On the theory that any attack model without an adversary is
automatically secure?

Geremy Condra

Chris Angelico

May 18, 2011, 1:24:12 PM
to pytho...@python.org
On Thu, May 19, 2011 at 2:54 AM, geremy condra <deba...@gmail.com> wrote:
> On Wed, May 18, 2011 at 12:36 AM, Hans Georg Schaathun <h...@schaathun.net> wrote:
>> But then, nothing is secure in any absolute sense.
>
> If you're talking security and not philosophy, there is such a thing
> as a secure system. As a developer you should aim for it.

Agreed. Things can be secure if you accept caveats. A good server
might be secure as long as attackers cannot, say:
* Get physical access to the server, remove the hard disk, and tamper with it
* Hold a gun to the developer and say "Log me in as root or you die"
* Trigger a burst of cosmic rays that toggle some bits in memory

If someone can do that, there's really not much you can do to stop
them. But you CAN make a system 100% secure against network-based
attacks.

Denial of service attacks are the hardest to truly defend against, and
if your level of business is low enough, you can probably ignore them
in your code, and deal with them by human ("Hmm, we seem to be getting
ridiculous amounts of traffic from XX.YY.ZZ.*, I think I'll put a
temporary ban on that /24"). Although some really nasty DOSes can be
blocked fairly easily, so it's worth thinking about them.
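For the cheap-and-easy end of that, a per-client token bucket inside the Python server is one common way to cap how fast any one address can issue expensive requests. This is only an illustrative sketch — the class name, rates, and client keys are invented for the example, not taken from any real server:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client rate limiter: each client earns `rate` tokens per
    second up to a ceiling of `burst`; a request costs one token."""

    def __init__(self, rate=5.0, burst=10.0):
        self.rate = rate
        self.burst = burst
        # Every new client starts with a full bucket and a fresh timestamp.
        self.tokens = defaultdict(lambda: burst)
        self.stamp = defaultdict(time.monotonic)

    def allow(self, client):
        now = time.monotonic()
        elapsed = now - self.stamp[client]
        self.stamp[client] = now
        # Refill for the time elapsed since this client's last request.
        self.tokens[client] = min(self.burst,
                                  self.tokens[client] + elapsed * self.rate)
        if self.tokens[client] >= 1.0:
            self.tokens[client] -= 1.0
            return True
        return False

limiter = TokenBucket(rate=1.0, burst=3.0)
# A client hammering the server burns its burst, then gets refused.
print([limiter.allow("1.2.3.4") for _ in range(5)])
```

The nice property is that a polite client never notices the limiter, while a flooding one is throttled without the server doing any per-packet work that iptables couldn't also do.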

But mainly: Don't panic about the really really obscure attack
possibilities, the ones that would only happen if someone with a lot
of resources is trying to bring you down. Just deal with the obvious
stuff - make sure your server cannot be compromised via a standard
network connection.

Test your server by connecting with a basic TELNET client (or a
hacked-up client, if it uses a binary protocol). Test your client by
connecting it to a hacked-up server. Make sure you can't muck up
either of them. Assume that any attacker will know every detail about
your comms protocol, because chances are he will know most of it.
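A quick way to act on that advice is a throwaway probe script. The sketch below spins up a toy stand-in for the server (the HELLO protocol and its replies are invented purely for the example) and fires raw bytes at it the way a hacked-up client would — the point being that malformed input should draw an explicit error, never a crash:

```python
import socket
import threading

def run_test_server(host="127.0.0.1"):
    """Stand-in for the server under test: a tiny service that answers
    every request explicitly, valid or not."""
    srv = socket.socket()
    srv.bind((host, 0))   # port 0: let the OS pick a free port
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            # Accept only a well-formed greeting; anything else gets a
            # clear error reply instead of undefined behaviour.
            if data.startswith(b"HELLO "):
                conn.sendall(b"OK\n")
            else:
                conn.sendall(b"ERROR bad request\n")

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()

def probe(addr, payload):
    """Play the hand-rolled client: send raw bytes, report the reply."""
    with socket.create_connection(addr, timeout=2) as c:
        c.sendall(payload)
        return c.recv(1024)

# Garbage in, explicit error out — the server must not fall over.
print(probe(run_test_server(), b"\x00\xff not-a-command"))
```

The same `probe` helper doubles as the "basic TELNET client" for binary protocols: point it at the real server and feed it truncated, reordered, or oversized payloads.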

Chris Angelico

Hans Georg Schaathun

May 18, 2011, 1:33:47 PM
to
On Wed, 18 May 2011 09:54:30 -0700, geremy condra
<deba...@gmail.com> wrote:
: On Wed, May 18, 2011 at 12:36 AM, Hans Georg Schaathun <h...@schaathun.net> wrote:
: > But then, nothing is secure in any absolute sense.

:
: If you're talking security and not philosophy, there is such a thing
: as a secure system. As a developer you should aim for it.

You think so? Please name one, and let us know how you know that it
is secure.

: > and thereby provides some level of security.


:
: The on-the-ground reality is that it doesn't. Lack of access to the
: source code has not kept windows or adobe acrobat or flash player
: secure, and they have large full-time security teams, and as you might
: imagine from the amount of malware floating around targeting those
: systems there are a lot of people who have these skills in spades.

You are just demonstrating that it does not provide complete security,
something which I never argued against.

: > Obviously, if your threat sources are dedicated hackers or maybe MI5,


: > there is no point bothering with obfuscation, but if your threat source
: > is script kiddies, then it might be quite effective.
:
: On the theory that any attack model without an adversary is
: automatically secure?

No, on the assumption that we were discussing real systems, real
threats, and practical solutions, rather than models and theory.
There will always be adversaries, but they have limited means, and
limited interest in your system. And the limits vary. Any marginal
control will stave off a few potential attackers who just could not
be bothered.

In theory, you can of course talk about absolute security. For
instance, one can design something like AES¹, which is secure in
a very limited, theoretical model. However, to be of any practical
use, AES must be built into a system, interacting with other systems,
and the theory and skills to prove that such a system be secure simply
has not been developed.

Why do you think Common Criteria have not yet specified frameworks
for the top levels of assurance?

¹ Advanced Encryption Standard
--
:-- Hans Georg

John Bokma

May 18, 2011, 1:31:58 PM
to
Chris Angelico <ros...@gmail.com> writes:

> On Thu, May 19, 2011 at 2:54 AM, geremy condra <deba...@gmail.com> wrote:
>> On Wed, May 18, 2011 at 12:36 AM, Hans Georg Schaathun <h...@schaathun.net> wrote:
>>> But then, nothing is secure in any absolute sense.
>>
>> If you're talking security and not philosophy, there is such a thing
>> as a secure system. As a developer you should aim for it.
>
> Agreed. Things can be secure if you accept caveats. A good server
> might be secure as long as attackers cannot, say:
> * Get physical access to the server, remove the hard disk, and tamper with it
> * Hold a gun to the developer and say "Log me in as root or you die"
> * Trigger a burst of cosmic rays that toggle some bits in memory

You forgot the most important one:

* if none of the software running on it has exploitable issues

Personally, I think it's best to understand that no server is ever
secure and hence one must always be prepared that a breach can happen.

--
John Bokma j3b

Blog: http://johnbokma.com/ Perl Consultancy: http://castleamber.com/
Perl for books: http://johnbokma.com/perl/help-in-exchange-for-books.html

geremy condra

May 18, 2011, 1:40:51 PM
to Chris Angelico, pytho...@python.org
On Wed, May 18, 2011 at 10:24 AM, Chris Angelico <ros...@gmail.com> wrote:
> On Thu, May 19, 2011 at 2:54 AM, geremy condra <deba...@gmail.com> wrote:
>> On Wed, May 18, 2011 at 12:36 AM, Hans Georg Schaathun <h...@schaathun.net> wrote:
>>> But then, nothing is secure in any absolute sense.
>>
>> If you're talking security and not philosophy, there is such a thing
>> as a secure system. As a developer you should aim for it.
>
> Agreed. Things can be secure if you accept caveats. A good server
> might be secure as long as attackers cannot, say:
> * Get physical access to the server, remove the hard disk, and tamper with it
> * Hold a gun to the developer and say "Log me in as root or you die"
> * Trigger a burst of cosmic rays that toggle some bits in memory

Just a note: you can do many cool things to prevent the last from
working, assuming you're talking about RSA fault injection attacks.

> If someone can do that, there's really not much you can do to stop
> them. But you CAN make a system 100% secure against network-based
> attacks.
>
> Denial of service attacks are the hardest to truly defend against, and
> if your level of business is low enough, you can probably ignore them
> in your code, and deal with them by human ("Hmm, we seem to be getting
> ridiculous amounts of traffic from XX.YY.ZZ.*, I think I'll put a
> temporary ban on that /24"). Although some really nasty DOSes can be
> blocked fairly easily, so it's worth thinking about them.
>
> But mainly: Don't panic about the really really obscure attack
> possibilities, the ones that would only happen if someone with a lot
> of resources is trying to bring you down. Just deal with the obvious
> stuff - make sure your server cannot be compromised via a standard
> network connection.

Just one caveat I would add to this: make sure you're drawing this
line at the correct place. If your attack model is wrong things have a
tendency to drop from 'impossible' to 'laughably easy' in a hurry.

> Test your server by connecting with a basic TELNET client (or a
> hacked-up client, if it uses a binary protocol). Test your client by
> connecting it to a hacked-up server. Make sure you can't muck up
> either of them. Assume that any attacker will know every detail about
> your comms protocol, because chances are he will know most of it.

I actually like to use scapy a lot. It's a little slow, but you can
really get down deep and still feel sort of sane afterwards, and it
makes it easier on you if you don't need to go all the way to the
metal.

Geremy Condra

Chris Angelico

May 18, 2011, 1:52:16 PM
to pytho...@python.org
On Thu, May 19, 2011 at 3:31 AM, John Bokma <jo...@castleamber.com> wrote:
>> Agreed. Things can be secure if you accept caveats. A good server
>> might be secure as long as attackers cannot, say:
>> * Get physical access to the server, remove the hard disk, and tamper with it
>> * Hold a gun to the developer and say "Log me in as root or you die"
>> * Trigger a burst of cosmic rays that toggle some bits in memory
>
> You forgot the most important one:
>
> * if none of the software running on it has exploitable issues

That's not a caveat. That's a purposeful and deliberate goal. And far
from impossible.

> Personally, I think it's best to understand that no server is ever
> secure and hence one must always be prepared that a breach can happen.

You need to balance the risk of a breach against the effort it'd take
to prevent. See my comments re DOS attacks; it's not generally worth
being preemptive with those, unless you're at a way higher transaction
level than this discussion is about (for those who came in late, it's
a basic network game, and not Google Docs or the DNS root servers or
something). If it's going to impose 500ms latency on all packets just
to prevent the one chance in 1E50 that you get some particular attack,
then it's really not worthwhile. However, it IS possible to ensure
that the server doesn't, for instance, trust the client; those
extremely basic protections are well worth the effort (even if it
seems like a lot of effort).

Chris Angelico

Chris Angelico

May 18, 2011, 2:07:25 PM
to pytho...@python.org
On Thu, May 19, 2011 at 3:40 AM, geremy condra <deba...@gmail.com> wrote:
> Just a note: you can do many cool things to prevent the last from
> working, assuming you're talking about RSA fault injection attacks.

Sure. Each of those caveats can be modified in various ways; keeping
checksums of everything in memory, encrypting stored data with
something that isn't stored on that computer, etc, etc, etc. But in
terms of effort for gain, it's not usually worth it. However, it is a
good idea to be aware of your caveats; for instance, are you aware
that most Linux systems will allow a root login from another file
system (eg a live-boot CD) to access the hard drive read-write,
regardless of file ownership and passwords? (My boss wasn't, and was
rather surprised at how easily it could be done.)

>> But mainly: Don't panic about the really really obscure attack

>> possibilities...


>
> Just one caveat I would add to this: make sure you're drawing this
> line at the correct place. If your attack model is wrong things have a
> tendency to drop from 'impossible' to 'laughably easy' in a hurry.

Absolutely. Sometimes it's worth scribbling comments in your code like:
/* TODO: If someone tries X, it might cause Y. Could rate-limit here
if that's an issue. */
Then, you keep an administrative eye on the production code. If you
start having problems, you can deal with them fast, rather than having
the ridiculous situation of security issues lingering for months or
years before finally getting a band-aid solution.
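One lightweight way to keep that "administrative eye" open is a counter that flags a client the moment it crosses a threshold, leaving the judgment call to a human. This is a hypothetical sketch — the threshold, function name, and log wording are invented for illustration:

```python
import logging
from collections import Counter

# Per-client tally of requests that are expensive to serve.
expensive_calls = Counter()
WARN_THRESHOLD = 100

def note_expensive_request(client_ip):
    """Record one expensive request; warn the operator exactly once
    when a client crosses the threshold, rather than rate-limiting
    pre-emptively."""
    expensive_calls[client_ip] += 1
    if expensive_calls[client_ip] == WARN_THRESHOLD:
        logging.warning("client %s has sent %d expensive requests",
                        client_ip, WARN_THRESHOLD)
```

A hook like this costs almost nothing per request, and turns the TODO comment into an actual signal the operator can react to before deciding whether a real rate limit is warranted.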

>> Test your server by connecting with a basic TELNET client...


>
> I actually like to use scapy a lot. It's a little slow, but you can
> really get down deep and still feel sort of sane afterwards, and it
> makes it easier on you if you don't need to go all the way to the
> metal.

Sort of sane? I lost that feeling years ago. :) When I'm working on
Windows, I'll sometimes use SMSniff for packet sniffing, but
generally, I just stick with high level socket services and depend on
the underlying libraries to deal with malformed packets and such. On
Linux, I generally whip up a quick script to do whatever job on the
spot (Python and Pike are both extremely well suited to this), but on
Windows, I use my MUD client, RosMud, which has a "passive mode"
option for playing the part of the server.

Chris Angelico

Littlefield, Tyler

May 18, 2011, 2:26:45 PM
to pytho...@python.org
>might be secure as long as attackers cannot, say:
You forgot UFOs.
Anyway, again, thanks to everyone for the advice, this is good reading.
Incidentally, I don't know too much about security. I know about rate
limiting and DoS attacks, as well as some others, but I think there's a
lot more that I don't know--can someone kind of aim me in the right
direction for some of this? I want to be able to take techniques, break
my server and then fix it so that can't be done, before I go public
with this.

Dotan Cohen

May 18, 2011, 2:30:00 PM
to Chris Angelico, pytho...@python.org
On Wed, May 18, 2011 at 20:24, Chris Angelico <ros...@gmail.com> wrote:
> But you CAN make a system 100% secure against network-based
> attacks.
>

Only by unplugging the network cable. This is called an air gap, and
is common in military installations. Anything with a cable plugged in
is hackable.

Dotan Cohen

May 18, 2011, 2:31:48 PM
to Chris Angelico, pytho...@python.org
On Wed, May 18, 2011 at 20:24, Chris Angelico <ros...@gmail.com> wrote:
> Denial of service attacks are the hardest to truly defend against, and
> if your level of business is low enough, you can probably ignore them
> in your code, and deal with them by human ("Hmm, we seem to be getting
> ridiculous amounts of traffic from XX.YY.ZZ.*, I think I'll put a
> temporary ban on that /24"). Although some really nasty DOSes can be
> blocked fairly easily, so it's worth thinking about them.
>

The python code should not be concerned with DDoS, that is what
iptables is for. Remember, never do in code what Linux will do for
you.

Chris Angelico

May 18, 2011, 2:37:30 PM
to pytho...@python.org
On Thu, May 19, 2011 at 4:31 AM, Dotan Cohen <dotan...@gmail.com> wrote:
> The python code should not be concerned with DDoS, that is what
> iptables is for. Remember, never do in code what Linux will do for
> you.

In general, yes. Denial of service is a fairly broad term, though, and
if there's a computationally-expensive request that a client can send,
then it may be worth rate-limiting it. Or if there's a request that
causes your server to send out inordinate amounts of data, and you're
running it on a typical home internet connection, then that's a DOS
vector too. So it's not only an iptables issue.

But yes. The "system" is the entire system, not just the Python code
you're writing.

ChrisA

Chris Angelico

May 18, 2011, 2:49:30 PM
to pytho...@python.org

Your last sentence IS the right direction. The two easiest ways to
find out if your system is secure are (1) try to break it, and (2)
pore over the code and see what can be broken.

When you start testing things, try doing things in the wrong order.
Your server should either cope with it fine, or throw back an error to
that client, but should never allow any action that that client hasn't
already proven he's allowed to do.
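A server can enforce that ordering with a tiny per-connection state machine that fails closed. A minimal sketch, with the LOGIN/MOVE verbs and the hard-coded credential invented purely for illustration:

```python
class Session:
    """Server-side session state: any command the client hasn't yet
    earned the right to send is refused, regardless of what the
    client claims."""

    def __init__(self):
        self.authenticated = False

    def handle(self, command):
        verb, _, arg = command.partition(" ")
        if verb == "LOGIN":
            # Placeholder check; a real server would verify credentials
            # against its own records, never trust the client.
            self.authenticated = (arg == "secret")
            return "OK" if self.authenticated else "ERROR bad credentials"
        if not self.authenticated:
            # Out-of-order request: fail closed with an explicit error.
            return "ERROR not logged in"
        if verb == "MOVE":
            return "OK moved " + arg
        return "ERROR unknown command"

s = Session()
print(s.handle("MOVE north"))    # refused: client skipped the login step
print(s.handle("LOGIN secret"))
print(s.handle("MOVE north"))    # now permitted
```

Because the state lives on the server, a tampered client can send commands in any order it likes and still only ever reach the actions the server has confirmed it is entitled to.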

There's plenty of people here who know what they're talking about when
it comes to security (just skim over this thread for a few good
names!), so if you have specific questions regarding your Python code,
do ask. Alternatively, if it's not particularly Python-related, I
would be happy for you to email me privately; I'm a gamer, and run an
online game, so I'd be quite willing to have a bit of a poke at your
code.

Chris Angelico

geremy condra

May 18, 2011, 3:07:49 PM
to Hans Georg Schaathun, pytho...@python.org
On Wed, May 18, 2011 at 10:33 AM, Hans Georg Schaathun <h...@schaathun.net> wrote:
> On Wed, 18 May 2011 09:54:30 -0700, geremy condra
>  <deba...@gmail.com> wrote:
> :  On Wed, May 18, 2011 at 12:36 AM, Hans Georg Schaathun <h...@schaathun.net> wrote:
> : > But then, nothing is secure in any absolute sense.
> :
> :  If you're talking security and not philosophy, there is such a thing
> :  as a secure system. As a developer you should aim for it.
>
> You think so?  Please name one, and let us know how you know that it
> is secure.

I was playing around with an HSM the other day that had originally
targeted FIPS 140-3 level 5, complete with formal verification models
and active side-channel countermeasures. I'm quite confident that it
was secure in nearly any practical sense.

> : > and thereby provides some level of security.
> :
> :  The on-the-ground reality is that it doesn't. Lack of access to the
> :  source code has not kept windows or adobe acrobat or flash player
> :  secure, and they have large full-time security teams, and as you might
> :  imagine from the amount of malware floating around targeting those
> :  systems there are a lot of people who have these skills in spades.
>
> You are just demonstrating that it does not provide complete security,
> something which I never argued against.

Ah, my mistake- when you said 'some level of security' I read that as
'some meaningful level of security'. If you were arguing that it
provided roughly as much protection to your code as the curtain of air
surrounding you does to your body, then yes- you're correct.

> : > Obviously, if your threat sources are dedicated hackers or maybe MI5,
> : > there is no point bothering with obfuscation, but if your threat source
> : > is script kiddies, then it might be quite effective.
> :
> :  On the theory that any attack model without an adversary is
> :  automatically secure?
>
> No, on the assumption that we were discussing real systems, real
> threats, and practical solutions, rather than models and theory.
> There will always be adversaries, but they have limited means, and
> limited interest in your system.  And the limits vary.  Any marginal
> control will stave off a few potential attackers who just could not
> be bothered.

Empirically this doesn't appear to be a successful gambit, and from an
attacker's point of view it's pretty easy to see why. When a system
I'm trying to break turns out to have done something stupid like this,
it really just ticks me off, and I know a lot of actual attackers who
think the same way.

> In theory, you can of course talk about absolute security.  For
> instance, one can design something like AES¹, which is secure in
> a very limited, theoretical model.  However, to be of any practical
> use, AES must be built into a system, interacting with other systems,
> and the theory and skills to prove that such a system be secure simply
> has not been developed.

This is flatly incorrect.

> Why do you think Common Criteria have not yet specified frameworks
> for the top levels of assurance?

Perhaps because the lower levels of 'assurance' don't seem to provide very much.

Geremy Condra

Hans Georg Schaathun

May 18, 2011, 3:56:17 PM
to
On Wed, 18 May 2011 12:07:49 -0700, geremy condra
<deba...@gmail.com> wrote:
: I was playing around with an HSM the other day that had originally

: targeted FIPS 140-3 level 5, complete with formal verification models
: and active side-channel countermeasures. I'm quite confident that it
: was secure in nearly any practical sense.

And you ostensibly use the word /nearly/ rather than «absolutely».
It seems that we agree.

BTW, according to the sources I can find quickly, FIPS 140-3
targets /modules/ and not systems.

: Ah, my mistake- when you said 'some level of security' I read that as


: 'some meaningful level of security'. If you were arguing that it
: provided roughly as much protection to your code as the curtain of air
: surrounding you does to your body, then yes- you're correct.

Well, I didn't. Whether it is meaningful is relative and dependent
on the context, but it sure isn't meaningful if any values at stake are.

: Empirically this doesn't appear to be a successful gambit, and from an


: attacker's point of view it's pretty easy to see why. When a system
: I'm trying to break turns out to have done something stupid like this,
: it really just ticks me off, and I know a lot of actual attackers who
: think the same way.

That is very true. It is a very crude measure with a marginal
effect on risk. Going out of one's way to try to obfuscate the
code as machine code, as was the starting point of the discussion,
is surely not a good strategy, as one is then spending significant
time to achieve a rather insignificant gain.

My main concern is that the use of absolutes, «you need this», and
«that is silly», is drawing attention from the main point. Rather,
get to know your risks and focus on the greater ones. Consider
possible controls, and choose cheap and effective ones. Even a
marginally effective control may be worth-while if the cost is even
less. We all seem to agree on the main point; many have argued the
same way.

As an aside, OTOH, don't you think MIFARE would have been broken
earlier if the source code were open? It was around for ages before
it was.

: > In theory, you can of course talk about absolute security.  For


: > instance, one can design something like AES¹, which is secure in
: > a very limited, theoretical model.  However, to be of any practical
: > use, AES must be built into a system, interacting with other systems,
: > and the theory and skills to prove that such a system be secure simply
: > has not been developed.
:
: This is flatly incorrect.

Which part of it? If you claim that the theory and skills to prove it
exist, could you give a reference please?

Of course, if you are only thinking of «nearly any practical sense»
again, then we agree and always did.

: > Why do you think Common Criteria have not yet specified frameworks
: > for the top levels of assurance?
:
: Perhaps because the lower levels of 'assurance' don't seem to provide very much.

If the lower levels do not, would that not be an argument to implement
more levels? Too many governments have put too many resources into
this to just throw it away if the methodology to achieve higher assurance
could be codified.

Or maybe it is right to say that the theory and skills do exist, but the
money to gather it all in one project to demonstrate the security of
a single system does not :-)

--
:-- Hans Georg

geremy condra

May 18, 2011, 5:34:46 PM5/18/11
to Hans Georg Schaathun, pytho...@python.org
On Wed, May 18, 2011 at 12:56 PM, Hans Georg Schaathun <h...@schaathun.net> wrote:
> On Wed, 18 May 2011 12:07:49 -0700, geremy condra
>  <deba...@gmail.com> wrote:
> :  I was playing around with an HSM the other day that had originally
> :  targeted FIPS 140-3 level 5, complete with formal verification models
> :  and active side-channel countermeasures. I'm quite confident that it
> :  was secure in nearly any practical sense.
>
> And you ostensibly use the word /nearly/ rather than «absolutely».
> It seems that we agree.

Systems can be designed that are absolutely secure under reasonable
assumptions. The fact that it has assumptions does not make your
statement true.

> BTW, according to the sources I can find quickly, FIPS 140-3
> targets /modules/ and not systems.

I can't tell if you're trying to play word games with the distinction
between "system" and "module" or if you're just saying that you aren't
sure what FIPS actually certifies. Could you please clarify?

> :  Ah, my mistake- when you said 'some level of security' I read that as
> :  'some meaningful level of security'. If you were arguing that it
> :  provided roughly as much protection to your code as the curtain of air
> :  surrounding you does to your body, then yes- you're correct.
>
> Well, I didn't.  Whether it is meaningful is relative and dependent
> on the context, but it sure isn't meaningful if any real values are at stake.

Again, I'm unsure what you're going for here. It sounds like you're
saying that obfuscation doesn't provide meaningful security, which is
my point.

> :  Empirically this doesn't appear to be a successful gambit, and from an
> :  attacker's point of view it's pretty easy to see why. When a system
> :  I'm trying to break turns out to have done something stupid like this,
> :  it really just ticks me off, and I know a lot of actual attackers who
> :  think the same way.
>
> That is very true.  It is a very crude measure with a marginal
> effect on risk.  Going out of one's way to try to obfuscate the
> code as machine code, as was the starting point of the discussion,
> is surely not a good strategy, as one is then spending significant
> time to achieve a rather insignificant gain.
>
> My main concern is that the use of absolutes, «you need this», and
> «that is silly», is drawing attention from the main point.  Rather,
> get to know your risks and focus on the greater ones.  Consider
> possible controls, and choose cheap and effective ones.  Even a
> marginally effective control may be worth-while if the cost is even
> less.  We all seem to agree on the main point; many have argued the
> same way.
>
> As an aside, OTOH, don't you think MAYFARE would have been broken
> earlier if the source code were open?  It was around for ages before
> it was.

Are you talking about the Mayfair classical cipher here?

> : > In theory, you can of course talk about absolute security.  For
> : > instance, one can design something like AES¹, which is secure in
> : > a very limited, theoretical model.  However, to be of any practical
> : > use, AES must be built into a system, interacting with other systems,
> : > and the theory and skills to prove that such a system be secure simply
> : > has not been developed.
> :
> :  This is flatly incorrect.
>
> Which part of it?  If you claim that the theory and skills to prove it
> exist, could you give a reference please?

The entire field of formal modeling and verification has grown around
solving this problem. My new favorite in the field is "formal models
and techniques for analyzing security protocols", but there are other
works discussing OS kernel verification (which has gotten a lot of
attention lately) and tons of academic literature. Google (scholar) is
the place to go.

> Of course, if you are only thinking of «nearly any practical sense»
> again, then we agree and always did.

Nope, talking about formal methods.

> : > Why do you think Common Criteria have not yet specified frameworks
> : > for the top levels of assurance?
> :
> :  Perhaps because the lower levels of 'assurance' don't seem to provide very much.
>
> If the lower levels do not, would that not be an argument to implement
> more levels?  Too many governments have put too many resources into
> this to just throw it away if the methodology to achieve higher assurance
> could be codified.

If you can't say with confidence that something meets minimum security
standards, the answer is not to try to say it meets high security
standards.

> Or maybe it is right to say that the theory and skills do exist, but the
> money to gather it all in one project to demonstrate the security of
> a single system does not :-)

Sorry, but again this is not correct.

Geremy Condra

geremy condra

May 18, 2011, 5:47:52 PM5/18/11
to ty...@tysdomain.com, pytho...@python.org
On Wed, May 18, 2011 at 11:26 AM, Littlefield, Tyler

<ty...@tysdomain.com> wrote:
>>might be secure as long as attackers cannot, say:
> You forgot UFOs.
> Anyway, again, thanks to everyone for the advice, this is good reading.
> Incidentally, I don't know to much about security. I know about rate
> limiting and dos attacks, as well as some others, but I think there's a lot
> more that I don't know--can someone kind of aim me in the right direction
> for some of this? I want to be able to take techniques, break my server and
> then fix it so that can't be done before I head to public with this.

One good thing to do is to just read some of the black hat papers.
They're pretty accessible and even if you don't know everything
they're saying you should be able to get a general feel for things
that way. You might also try working through things like Damn
Vulnerable Web App, if you have the time.

Geremy Condra

harrismh777

May 18, 2011, 10:54:48 PM5/18/11
to
Littlefield, Tyler wrote:
> I know about rate limiting and dos attacks, as well as some others, but
> I think there's a lot more that I don't know--can someone kind of aim me
> in the right direction for some of this? I want to be able to take
> techniques, break my server and then fix it so that can't be done before
> I head to public with this.

Black-hat and gray-hat papers are some of the best resources; and
entertaining to boot...

Four resources that you will want to look into, in no particular order:

Erickson, Jon, "Hacking: The Art of Exploitation," 2nd ed,
San Francisco: No Starch Press, 2008.


Anonymous, "Maximum Linux Security: A Hacker's Guide to Protecting
Your Linux Server and Workstation," Indianapolis:
Sams Publishing, 2000.

(check for other editions)
(this volume is a good read, even for other platforms,
but is geared specifically to Linux)


Graves, Kimberly, "CEH Certified Ethical Hacker: Study Guide,"
Indianapolis: Wiley Publishing, 2010.


Seitz, Justin, "Gray Hat Python: Python Programming for Hackers
and Reverse Engineers," San Francisco: No Starch Press, 2009.


The best way to protect your system is first to be able to
understand how someone else will attempt to compromise it.

I personally am an *ethical* hacker; by definition, I exploit
possibilities for problem solving, and I cause *NO* harm. Having said
that, I have studied *all* of the techniques employed in the field for
causing harm; why? Because that is the *only* way to know how to defend
against them.

It's like missile, anti-missile... virus, anti-virus, and the
like. Because *all* software is mathematical by nature, it is not
possible to lock software with software... this is partially the
decidability problem at work. But mostly it's a matter of their skills
getting better... yours better be better yet, and when they get even
better than you--- well, you better be ready to improve ... and on and
on it goes... But first, you need to understand what you're up against.

There is absolutely *no* way to prevent reverse engineering. It's
all just code, and that code can be unraveled with the right math and
enough time. Time and talent are all it takes; that and the will to be
tenacious and uncompromising. If someone wants your system badly enough,
they will own it... it's just a matter of time... so be ready for it...
like the Bible says, "If the master of the house knew what hour the
thief would break in and steal, he would have kept better watch on his
house!"

kind regards,
m harris

Hans Georg Schaathun

May 19, 2011, 1:21:08 AM5/19/11
to
On Wed, 18 May 2011 14:34:46 -0700, geremy condra
<deba...@gmail.com> wrote:
: Systems can be designed that are absolutely secure under reasonable
: assumptions. The fact that it has assumptions does not make your
: statement true.
: (...)
: I can't tell if you're trying to play word games with the distinction
: between "system" and "module" or if you're just saying that you aren't
: sure what FIPS actually certifies. Could you please clarify?

The distinction between system and module is rather significant.
If you only consider modules, you have bounded your problem and
drastically limited the complexity.

: Again, I'm unsure what you're going for here. It sounds like you're
: saying that obfuscation doesn't provide meaningful security, which is
: my point.

Meaningful is a relative term, and it is hard to rule out the
possibility that meaning can be found in some case. Overall, we
agree though.

: Are you talking about the Mayfair classical cipher here?

I am talking about the system used in public transport cards like
Oyster and Octopus. I am not sure how classical it is, or whether
mayfair/mayfare referred to the system or just a cipher. Anyway,
it was broken, and it took years.

: The entire field of formal modeling and verification has grown around
: solving this problem. My new favorite in the field is "formal models
: and techniques for analyzing security protocols", but there are other
: works discussing OS kernel verification (which has gotten a lot of
: attention lately) and tons of academic literature. Google (scholar) is
: the place to go.

Sure, but now you are considering modules, rather than systems again.
It is when these reliable components are put together to form systems
that people fail (empirically).

: If you can't say with confidence that something meets minimum security
: standards, the answer is not to try to say it meets high security
: standards.

So what? The levels of assurance have nothing to do with standards.
The levels of assurance refer to the /confidence/ you can have that
the standards are met.

: > Or maybe it is right to say that the theory and skills do exist, but the
: > money to gather it all in one project to demonstrate the security of
: > a single system does not :-)
:
: Sorry, but again this is not correct.

You keep saying that, but whenever you try to back the claim, you
keep referring to limited components and not systems at all.

--
:-- Hans Georg

Steven D'Aprano

May 19, 2011, 4:47:28 AM5/19/11
to
On Thu, 19 May 2011 06:21:08 +0100, Hans Georg Schaathun wrote:

> : Are you talking about the Mayfair classical cipher here?
>
> I am talking about the system used in public transport cards like Oyster
> and Octopus. I am not sure how classical it is, or whether
> mayfair/mayfare referred to the system or just a cipher.


I think Geremy is talking about the Playfair cipher:

http://en.wikipedia.org/wiki/Playfair_cipher


> Any way, it was broken, and it took years.

You don't know that. All you know is that it took years for people to
realise that it had been broken, when a security researcher publicly
announced the MIFARE cipher had been broken. If criminals had broken the
cipher, they would have had no incentive to publicize the fact, and the
companies running Oyster and similar ticketing schemes would have no
incentive to admit they were broken. Far from it: all the incentives are
against disclosure.

So it's possible that Oyster cards have been counterfeited for years
without anyone but the counterfeiters, and possibly the Oyster card
people themselves, knowing.

The real barrier to cracking Oyster cards is not that the source code is
unavailable, but that the intersection of the set of those who know how
to break encryption, and the set of those who want to break Oyster cards,
is relatively small. I don't know how long it took to break the encryption,
but I'd guess that it was probably a few days of effort by somebody
skilled in the art.

http://www.usenix.org/events/sec08/tech/full_papers/nohl/nohl_html/index.html


--
Steven

Hans Georg Schaathun

May 19, 2011, 5:16:54 AM5/19/11
to
On 19 May 2011 08:47:28 GMT, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
: The real barrier to cracking Oyster cards is not that the source code is
: unavailable, but that the intersection of the set of those who know how
: to break encryption, and the set of those who want to break Oyster cards,
: is relatively small. I don't know how long it took to break the encryption,
: but I'd guess that it was probably a few days of effort by somebody
: skilled in the art.
:
: http://www.usenix.org/events/sec08/tech/full_papers/nohl/nohl_html/index.html

In that paper, more than one art seem to have been applied. An open
design would have eliminated the need for image analysis and reduced
the requirement on hardware/electronics skills. Hence, the obfuscation
has made that intersection you talk about smaller, and increased the
cost of mounting the attack. As the system was broken anyway, it is
hardly a victory for obfuscation, but that's beside the point.

The work of that paper is almost certainly more than just «a few
days of effort». There are simply too many technical issues to tackle,
and they must be tackled one by one. Part of the cost of mounting the
attack is figuring out what it takes to do it, before spending the
resources barking up the wrong tree. For each successful attack, there
are probably a number of failed ones.

Thanks for the reference.

BTW. That's not the only attack on MIFARE. I cannot remember the
details of the other.

--
:-- Hans Georg

geremy condra

May 19, 2011, 1:23:47 PM5/19/11
to Hans Georg Schaathun, pytho...@python.org
On Wed, May 18, 2011 at 10:21 PM, Hans Georg Schaathun <h...@schaathun.net> wrote:
> On Wed, 18 May 2011 14:34:46 -0700, geremy condra
>  <deba...@gmail.com> wrote:
> :  Systems can be designed that are absolutely secure under reasonable
> :  assumptions. The fact that it has assumptions does not make your
> :  statement true.
> : (...)
> :  I can't tell if you're trying to play word games with the distinction
> :  between "system" and "module" or if you're just saying that you aren't
> :  sure what FIPS actually certifies. Could you please clarify?
>
> The distinction between system and module is rather significant.
> If you only consider modules, you have bounded your problem and
> drastically limited the complexity.

Ah, the 'word games' option. I'm not going to spend a lot of time
arguing this one: HSMs are clearly the domain of systems research, are
referred to in both technical and nontechnical documents as 'keystone
systems', and the FIPS standard under which they are certified
specifically calls them systems more times than I care to count. They
are, to the people who make and use them, systems, and your attempt at
redefinition won't change that.

> :  Are you talking about the Mayfair classical cipher here?
>
> I am talking about the system used in public transport cards like
> Oyster and Octopus.  I am not sure how classical it is, or whether
> mayfair/mayfare referred to the system or just a cipher.  Anyway,
> it was broken, and it took years.

Ah, MIFARE. That's a different story, and no, I don't believe they
would have been broken sooner if the specs were released. The
importance (and difficulty) of securing devices like smartcards wasn't
really recognized until much later, and certainly people with a foot
in both worlds were very rare for a long time. Also remember that DES
(with its 56-bit keys) was recertified just a few months before MIFARE
(with its 48-bit keys) was first released- it was a different world.

> :  The entire field of formal modeling and verification has grown around
> :  solving this problem. My new favorite in the field is "formal models
> :  and techniques for analyzing security protocols", but there are other
> :  works discussing OS kernel verification (which has gotten a lot of
> :  attention lately) and tons of academic literature. Google (scholar) is
> :  the place to go.
>
> Sure, but now you are considering modules, rather than systems again.
> It is when these reliable components are put together to form systems
> that people fail (empirically).

Let me get this straight: your argument is that operating *systems*
aren't systems?

> :  If you can't say with confidence that something meets minimum security
> :  standards, the answer is not to try to say it meets high security
> :  standards.
>
> So what?  The levels of assurance have nothing to do with standards.
> The levels of assurance refer to the /confidence/ you can have that
> the standards are met.

The increasing levels of assurance don't just signify that you've
checked for problems- they certify that you don't have them, at least
insofar as that level of testing is able to find. Insisting that this
doesn't, or shouldn't, translate into tighter security doesn't make
much sense.

Geremy Condra

geremy condra

May 19, 2011, 1:50:55 PM5/19/11
to harrismh777, pytho...@python.org
On Wed, May 18, 2011 at 7:54 PM, harrismh777 <harri...@charter.net> wrote:
> Littlefield, Tyler wrote:

<snip>

> Four resources that you will what to look into, in no particular order:
>
> Erickson, Jon, "Hacking: The Art of Exploitation," 2nd ed,
>        San Francisco: No Starch Press, 2008.

This would be a very good choice. It's a bit light on details, but
makes up for it by being exceptionally well-written and very
accessible.

> Anonymous, "Maximum Linux Security: A Hacker's Guide to Protecting
>        Your Linux Server and Workstation," Indianapolis:
>        Sams Publishing, 2000.
>
>        (check for other editions)
>        (this volume is a good read, even for other platforms,
>                but is geared specifically to Linux)

This is a good volume, but very dated. I'd probably pass on it.

> Graves, Kimberly, "CEH Certified Ethical Hacker: Study Guide,"
>        Indianapolis: Wiley Publishing, 2010.

Briefly glancing over the TOC, this actually looks surprisingly good.
CEH itself is a joke among black hats, but if this gets down to the
nitty-gritty of actually performing the attacks it covers it sounds
like a buy.

> Seitz, Justin, "Gray Hat Python: Python Programming for Hackers
>        and Reverse Engineers," San Francisco: No Starch Press, 2009.

I'd skip this one, as it isn't really focused on what you want. The
web application hacker's handbook is probably more along the lines of
what you need, if you're going for a book. There's also an older
volume called 'counter hack' that gives a good overview of some of the
ways that attacks proceed.

Another recommend I'm surprised hasn't popped up already: 'security
power tools' is a good way to get your foot in the door. It has a
practical, no-nonsense approach and is split into self-contained
chapters so you don't waste too much of your time on tools that aren't
relevant to you.

Geremy Condra

Hans Georg Schaathun

May 19, 2011, 2:23:28 PM5/19/11
to
On Thu, 19 May 2011 10:23:47 -0700, geremy condra
<deba...@gmail.com> wrote:
: Let me get this straight: your argument is that operating *systems*
: aren't systems?

You referred to the kernel and not the system. The complexities of
the two are hardly comparable.

There probably are different uses of «system»; in computer security
literature¹ it often refers not only to a product (hardware/software)
but to an actual installation and configuration of that product in a
specific context. /I/ did not redefine it.

Speaking of reasonable assumptions, one necessary assumption which is
particularly dodgy is that whoever deploys and configures it
understands all the assumptions and does not break them through ignorance.

Is your concern with security purely from a developer's viewpoint,
so that you don't have to worry about the context in which it will
be deployed?

: > So what?  The levels of assurance have nothing to do with standards.
: > The levels of assurance refer to the /confidence/ you can have that
: > the standards are met.
:
: The increasing levels of assurance don't just signify that you've
: checked for problems- it certifies that you don't have them, at least
: insofar as that level of testing is able to find. Insisting that this
: doesn't, or shouldn't, translate into tighter security doesn't make
: much sense.

Tighter, sure, but the security requirements and the requirement on
testing and/or validation are orthogonal scales. The higher levels
of assurance are based on formal methods while the lower ones are based
primarily on testing.

I read your initial comment to imply that if you cannot get satisfactory
assurance using the lower levels, you won't get any at the higher
levels. That does not make any sense. Obviously, if you were implying
that no system passes the lower levels, then of course they won't pass
the higher levels, but then, if that's the case, we would all know that
we cannot even design /seemingly/ secure systems. And nobody has
suggested that so far.


¹ e.g. Dieter Gollmann for just one ref off the top of my head.
--
:-- Hans Georg

geremy condra

May 19, 2011, 8:56:12 PM5/19/11
to Hans Georg Schaathun, pytho...@python.org
On Thu, May 19, 2011 at 11:23 AM, Hans Georg Schaathun <h...@schaathun.net> wrote:
> On Thu, 19 May 2011 10:23:47 -0700, geremy condra
>  <deba...@gmail.com> wrote:
> :  Let me get this straight: your argument is that operating *systems*
> :  aren't systems?
>
> You referred to the kernel and not the system.  The complexities of
> the two are hardly comparable.

I don't know about that. Among the many verified microkernels, at
least two projects have formally verified both their kernel and their
toolchain, and one of them claims they've verified everything in their
TCB and are headed towards verified POSIX compliance in 2012. That
would seem to be a fairly large system (and definitely a complete OS)
to me. Another (seL4) says they've formally verified security of a
complete system that includes a userspace and the ability to run other
OSes in fully isolated containers, which also seems to be quite
complete. Finally, there's one from Microsoft research that claims
similar properties but which apparently isn't interested in
compatibility, which I'm not sure how to interpret in terms of
usefulness and size. In any event, higher level systems- like
electronic voting mechanisms and automotive sensor networks- have also
been verified, which seems to run counter to your original point.

Also, not sure if it's open to the general public but if you're
interested in this kind of thing and live near Seattle, I think
there's actually going to be a talk on verifying a POSIX userspace
implementation here tomorrow.

TL;DR version: large systems have indeed been verified for their
security properties.

> There probably are different uses of system; in computer security
> literature¹ it often refers, not only to a product (hardware/software)
> an actual installation and configuration of that product in a specific
> context.  /I/ did not redefine it.

You chose a word with a many meanings, used it to make a very broad
statement which is only a little bit true, and then pretended that you
had the One True Definition in your pocket. I don't think that's
legitimate, but whatever; let's just say that we meant different
things by the word and drop it.

> Speaking of reasonable assumptions, one necessary assumption which is
> particularly dodgy is that whoever deploys and configures it
> understands all the assumptions and does not break them through ignorance.

Yup. Nothing is safe from idiots.

> Is your concern with security purely from a developer's viewpoint,
> so that you don't have to worry about the context in which it will
> be deployed?

My viewpoint is that of an attacker, since that's more or less my job.

> I read your initial comment to imply that if you cannot get satisfactory
> assurance using the lower levels, you won't get any at the higher
> levels.  That does not make any sense.

Well, this is kind of like my point. My point was that you really
don't get anything at the lower levels, and that they should fix that
(which is far more useful to a normal consumer) rather than trying to
talk about formal verification and similar tools, which are only going
to be used on a tiny fraction of products.

Geremy Condra

Chris Angelico

May 19, 2011, 9:33:25 PM5/19/11
to pytho...@python.org
On Fri, May 20, 2011 at 10:56 AM, geremy condra <deba...@gmail.com> wrote:
>> Speaking of reasonable assumptions, one necessary assumption which is
>> particularly dodgy is that whoever deploys and configures it
>> understands all the assumptions and does not break them through ignorance.
>
> Yup. Nothing is safe from idiots.
>

Which means that the assumption really is that you are evaluating a
system, not a bald piece of code. I don't consider that an assumption.
When you're writing code that you will yourself deploy, you take full
responsibility; when you let other people deploy it, they have to take
ultimate responsibility (although they will legitimately expect you to
provide an install script and/or instructions).

There are idiots in this world.
Have you met them?
Met them? I listen to you every week!
-- The Goon Show, and so absolutely true

Chris Angelico

geremy condra

May 19, 2011, 10:30:52 PM5/19/11
to Chris Angelico, pytho...@python.org
On Thu, May 19, 2011 at 6:33 PM, Chris Angelico <ros...@gmail.com> wrote:
> On Fri, May 20, 2011 at 10:56 AM, geremy condra <deba...@gmail.com> wrote:
>>> Speaking of reasonable assumptions, one necessary assumption which is
>>> particularly dodgy is that whoever deploys and configures it
>>> understands all the assumptions and does not break them through ignorance.
>>
>> Yup. Nothing is safe from idiots.

I actually think I need to take this statement back. The more I think
about it, the less convinced I am that it's correct- I can at least
conceive of violable systems which cannot be misconfigured. So, sorry
about that.

> Which means that the assumption really is that you are evaluating a
> system, not a bald piece of code. I don't consider that an assumption.
> When you're writing code that you will yourself deploy, you take full
> responsibility; when you let other people deploy it, they have to take
> ultimate responsibility (although they will legitimately expect you to
> provide an install script and/or instructions).

Sure, although I would personally still call it an assumption.

Geremy Condra

Chris Angelico

May 19, 2011, 10:35:17 PM5/19/11
to pytho...@python.org
On Fri, May 20, 2011 at 12:30 PM, geremy condra <deba...@gmail.com> wrote:
>> On Fri, May 20, 2011 at 10:56 AM, geremy condra <deba...@gmail.com> wrote:
>>> Yup. Nothing is safe from idiots.
>
> I actually think I need to take this statement back. The more I think
> about it, the less convinced I am that it's correct- I can at least
> conceive of violable systems which cannot be misconfigured. So, sorry
> about that.

If it is, then you're not deploying it, you're just pushing buttons
and acting like a user. I still stand by the view that the one with
the root password is the one responsible for the computer's security;
and if you have the root filesystem password, there's no way that
something can be made unmisconfigurable. (You CAN, however, make
something that's out-of-the-box secure, so someone just does a 'sudo
apt-get install yoursystem' and it's specced up nicely. This is a Good
Thing.)

Chris Angelico

Hans Georg Schaathun

May 20, 2011, 12:48:50 AM5/20/11
to
On Thu, 19 May 2011 17:56:12 -0700, geremy condra
<deba...@gmail.com> wrote:
: TL;DR version: large systems have indeed been verified for their
: security properties.
: (...)
: Yup. Nothing is safe from idiots.

The difficult part is mapping those properties to actual requirements
and threat models. Formal methods do not help on that step. It takes
more than a non-idiot to avoid misunderstandings on the interface
between professions.

Either way, the assumption that your system will not be handled by
idiots is only reasonable if you yourself are the only user.

--
:-- Hans Georg

harrismh777

May 20, 2011, 2:17:44 AM5/20/11
to
geremy condra wrote:
>> Anonymous, "Maximum Linux Security: A Hacker's Guide to Protecting
>> > Your Linux Server and Workstation," Indianapolis:
>> > Sams Publishing, 2000.

> This is a good volume, but very dated. I'd probably pass on it.

Actually, although dated, it's still a very good manual for concepts, and
much of it... believe it or not... is still just as valid as the day it
was written.

Some things of course have changed, like web security and protocols.

Some of the linux admin stuff has now been automated with reasonable
defaults, *but not all*...

Appendix D -- an additional resources bibliography -- is good!

Maybe try to buy or borrow a used copy [ or just skip it... ]


PS: I have really hoped that Anonymous would put out a second
edition, but I can't find it... so not yet...


kind regards,
m harris


Steven D'Aprano

May 20, 2011, 3:04:27 AM5/20/11
to
On Fri, 20 May 2011 05:48:50 +0100, Hans Georg Schaathun wrote:

> Either way, the assumption that your system will not be handled by
> idiots is only reasonable if you yourself are the only user.

Nonsense. How do you (generic "you", not any specific person) know that
you are not an idiot?

If you are an idiot, you obviously shouldn't trust your own judgment --
although of course idiots do trust their own judgment when they
shouldn't, and the less they know, the less they realise how little they
know:

http://en.wikipedia.org/wiki/Dunning–Kruger_effect

So if you think that you're not an idiot, you might be an idiot who is
unaware of being an idiot. Your own opinion is the last opinion you
should pay attention to. The world is full of people with delusions of
superiority -- only an idiot would trust their own opinion of themselves.

You can listen to others, but only so long as you don't surround yourself
with idiots. But how do you know if the people around you are idiots? You
certainly can't trust your judgment, nor can you trust theirs. If you're
an idiot, you (still talking about generic "you") and your idiot friends
are probably all congratulating yourselves for not being idiots.

In contrast, if you're not an idiot, then you probably are aware (and if
not, you should be) of all the cognitive biases human beings are prone
to, of all the mental and emotional weaknesses that we all suffer from,
which cause us to act in idiotic ways. If you're not an idiot, then you
know your limitations, that like everyone, you can be fooled or foolish,
that you can make mistakes, that you sometimes operate equipment when you
are not at the optimum level of alertness, when your attention to detail
is below normal or you are a little more careless than you should be.

In short, that everyone, including yourself, can be an idiot, and the
more intelligent you are, the more astonishingly stupid your mistakes may
be. Any moron can accidentally burn themselves with a match, but it takes
a first-class genius to give chronic lead poisoning to tens of millions
*and* nearly destroy the ozone layer of the entire world:

http://en.wikipedia.org/wiki/Thomas_Midgley,_Jr.

So... if you think you are not an idiot, you are, and if you think you
are an idiot, you are. Either way, even if your software is only being
used by yourself, you should still attempt to make it as idiot-proof as
an idiot like yourself can make it.

--
Steven

Steven D'Aprano

May 20, 2011, 3:10:45 AM
On Thu, 19 May 2011 17:56:12 -0700, geremy condra wrote:

> TL;DR version: large systems have indeed been verified for their
> security properties.

How confident are we that the verification software is sufficiently bug-
free that we should trust their results?

How confident are we that the verification software tests every possible
vulnerability, as opposed to merely every imaginable one?


--
Steven

Hans Georg Schaathun

May 20, 2011, 4:54:46 AM
On 20 May 2011 07:04:27 GMT, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:

: On Fri, 20 May 2011 05:48:50 +0100, Hans Georg Schaathun wrote:
:
: > Either way, the assumption that your system will not be handled by
: > idiots is only reasonable if you yourself are the only user.
:
: Nonsense. How do you (generic "you", not any specific person) know that
: you are not an idiot?

You don't, but if you are, you cannot trust any of the other assumptions
either, and making this assumption is reasonable because it is less of a
leap than anything else you have done.

--
:-- Hans Georg

Disc Magnet

May 20, 2011, 5:19:04 AM
to ty...@tysdomain.com, pytho...@python.org
On Mon, May 16, 2011 at 9:06 AM, Littlefield, Tyler <ty...@tysdomain.com> wrote:
> I'm putting lots of work into this. I would rather not have some script
> kiddy dig through it, yank out chunks and do whatever he wants. I just want
> to distribute the program as-is, not distribute it and leave it open to
> being hacked.

Obfuscating the code won't help here. Remember, "the enemy knows the system."
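The futility of shipping "compiled" Python can be seen directly with the standard library: CPython bytecode carries the program's logic and constants in readable form. A minimal sketch (the `secret` function and its constants are made up for illustration):

```python
# Even without the source, CPython bytecode exposes logic and constants.
# The dis module (standard library) prints a disassembly of any function.
import dis

def secret(key):
    # Imagine these constants were something you wanted to keep private.
    return key * 31 + 7

# Prints a bytecode listing in which 31 and 7 appear as LOAD_CONST operands.
dis.dis(secret)
```

The same listing can be produced from a shipped `.pyc` file, which is why the thread keeps saying obfuscation buys you very little.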

geremy condra

May 20, 2011, 12:26:28 PM
to Steven D'Aprano, pytho...@python.org
On Fri, May 20, 2011 at 12:10 AM, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
> On Thu, 19 May 2011 17:56:12 -0700, geremy condra wrote:
>
>> TL;DR version: large systems have indeed been verified for their
>> security properties.
>
> How confident are we that the verification software is sufficiently bug-
> free that we should trust their results?

Pretty confident. Most formal verification systems are developed in
terms of a provably correct kernel bootstrapping the larger system.
The important thing is that since that kernel doesn't need to be
complete (only correct) it can typically be easily verified, and in
some cases exhaustively tested. There are also techniques which
generate certificates of correctness for verifiers that aren't
provably correct, but that isn't an area I know much about, and I
don't know if that gets used in practice. The bigger risk is really
that the model you're feeding it is wrong.

> How confident are we that the verification software tests every possible
> vulnerability, as opposed to merely every imaginable one?

Formal provers typically don't work by just throwing a bunch of input
at a piece of software and then certifying it. They take a set of
specifications (the model), a set of assumptions, and the program in
question, and provide a proof (in the mathematical sense) that the
program is exactly equivalent to the model given the assumptions.
Testing the assumptions and model are typically part of the
development process, though, and that's definitely a possible source
of errors.
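The model/assumptions/program split described above can be caricatured in a few lines of Python. This is only a toy: exhaustively checking a finite domain is *not* a proof, and real provers establish the equivalence for all inputs. All names here are invented for illustration:

```python
def model_abs(x):
    # The model: the mathematical specification of absolute value.
    return x if x >= 0 else -x

def impl_abs(x):
    # The program under scrutiny (this one happens to be correct).
    return max(x, -x)

def check(domain):
    # The assumption: inputs come from `domain`. Outside that assumption
    # the check says nothing -- which is exactly where a wrong model bites,
    # as noted above.
    return all(model_abs(x) == impl_abs(x) for x in domain)

print(check(range(-1000, 1001)))   # True
```

A formal prover replaces the `all(...)` loop with a mathematical argument covering every input, but the three ingredients -- model, assumptions, program -- are the same.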

Geremy Condra

Nobody

May 20, 2011, 1:48:10 PM
On Fri, 20 May 2011 07:10:45 +0000, Steven D'Aprano wrote:

> How confident are we that the verification software tests every possible
> vulnerability,

Formal verification is based upon mathematical proof, not empirical
results.

As Dijkstra said: "Program testing can be used to show the presence of
bugs, but never to show their absence".

For complex algorithms, it may be infeasible to cover even all of the
"interesting" cases, let alone a representative sample of all possible
cases. For concurrent (multi-threaded) code, it's often impractical to
methodically test various interleavings.
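Dijkstra's dictum is easy to demonstrate: a test suite can pass in full while a bug sits just outside the cases it happens to cover. The buggy leap-year function below is invented for illustration:

```python
def is_leap_year(year):
    # Buggy: implements the 4-year and 100-year rules but forgets the
    # 400-year exception (years divisible by 400 *are* leap years).
    return year % 4 == 0 and year % 100 != 0

# A plausible-looking test suite -- every assertion passes:
for year, expected in [(2012, True), (1999, False), (1900, False), (2023, False)]:
    assert is_leap_year(year) == expected

# ...and yet the bug is there, waiting for an input the tests never tried:
print(is_leap_year(2000))   # False, though 2000 was a leap year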

harrismh777

May 20, 2011, 4:24:45 PM
Steven D'Aprano wrote:
> Nonsense. How do you (generic "you", not any specific person) know that
> you are not an idiot?

lol Sum, ergo Idiot cogitat.


Reminds me of a philosophical story I heard one time from my religion
professor...

... as it goes, De Carte leads his horse into town ;-) and, having
hitched it to the rail outside the local saloon, saunters up to the
bar. The tender asks, "Would you be hav'in an ale sir?"

... De Carte replies, "I think not..." ... and then disappeared.


:)

geremy condra

May 20, 2011, 6:45:03 PM
to harrismh777, pytho...@python.org

At risk of being pedantic, I think you mean Descartes rather than De Carte.

Geremy Condra

Steven D'Aprano

May 20, 2011, 8:54:01 PM

Being a drunken old fart, I can't imagine Descartes turning down an ale...

http://www.bbc.co.uk/dna/h2g2/A3651545

--
Steven

harrismh777

May 21, 2011, 12:26:24 AM
Steven D'Aprano wrote:
>>> ... as it goes, De Carte leads his horse into town ;-) and having
>>> hitched it to the rail outside the local saloon and sauntering up to
>>> the bar, the tender asks, "Would you be hav'in an ale sir?"
>>>
>>> ... De Carte replies, "I think not..." ... and then disappeared.
>>
>> At risk of being pedantic, I think you mean Descartes rather than De
>> Carte.
>
> Being a drunken old fart, I can't imagine Descartes turning down an ale...
>
> http://www.bbc.co.uk/dna/h2g2/A3651545
>
>

... uh, yes... playing on 'de carte before de horse...'

<sorry>

... as for Steven's link:

And Rene Descartes was a drunken old fart:
"I drink, therefore I am",

René Descartes (1596-1650)


I am not sure about Descartes' drinking habits, but he was one true
philosopher and mathematician... so we honor him... with jokes!


:)


(how many of you guys are they going to be joking about 450 years from
now ?)

