Mailman has compiled stuff, but isn't ViewCVS pure Python?
The viewCVS exploit is detailed here
http://lwn.net/2002/0523/a/viewcvs.php3
Can some wizard kindly explain exactly how the client CGI is made
responsible for security defence against bad URLs. It seems to me that
the client browser should be responsible, but apparently not.
The alleged fix seems to involve more complete argument checking, is
that required for any such defence? What should the request response be?
--
Robin Becker
from www.python.org online docs
(http://www.python.org/doc/current/lib/cgi-security.html)
11.2.6 Caring about security
There's one important rule: if you invoke an external program (via the
os.system() or os.popen() functions, or others with similar functionality),
make very sure you don't pass arbitrary strings received from the client to
the shell. This is a well-known security hole whereby clever hackers
anywhere on the Web can exploit a gullible CGI script to invoke arbitrary
shell commands. Even parts of the URL or field names cannot be trusted,
since the request doesn't have to come from your form!
To be on the safe side, if you must pass a string gotten from a form to
a shell command, you should make sure the string contains only alphanumeric
characters, dashes, underscores, and periods.
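That whitelist check can be sketched in a few lines (the helper name is ours, not from the docs):

```python
import re

# Accept only alphanumerics, dashes, underscores, and periods, per the
# cgi-security docs, before a value goes anywhere near a shell.
SAFE = re.compile(r'^[A-Za-z0-9._-]+$')

def is_shell_safe(value):
    """Return True only if value matches the conservative whitelist."""
    return SAFE.match(value) is not None

print(is_shell_safe("report-2002.txt"))  # True
print(is_shell_safe("foo; rm -rf /"))    # False
```

Note that this rejects rather than escapes: anything outside the whitelist, including an empty string, simply fails the check.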
You will probably find some security checks lying around (such as dot-dot
path checks)
Erlend J. Leiknes
"Robin Becker" <ro...@jessikat.fsnet.co.uk> wrote in message
news:DuIBcWA5...@jessikat.fsnet.co.uk...
Cross-site scripting attacks have nothing to do with eval() or compile();
they are caused by including untrusted bits of text in HTML output
without escaping them.
This means that if someone manages to input <script>...javascript
code...</script> into the program (perhaps by putting it in their CVS
checkin message), someone who comes along and views the page later
will end up running that JavaScript code.
The solution is difficult: you just have to be very careful to always
escape text of unknown provenance that's in HTML.
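The escaping step looks like this, using the modern html.escape (the Python 2-era equivalent was cgi.escape); the checkin message is an invented example:

```python
import html

# A hostile CVS checkin message containing script markup.
checkin_message = '<script>alert(document.cookie)</script>'

# Escaping turns the markup into inert entities before it reaches the page,
# so the browser renders it as text instead of executing it.
safe = html.escape(checkin_message, quote=True)
page = "<p>Last checkin message: %s</p>" % safe
print(page)
```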
--amk
[Cross-site scripting exploit in ViewCVS]
> Can some wizard kindly explain exactly how the client CGI is made
> responsible for security defence against bad URLs. It seems to me that
> the client browser should be responsible, but apparently not.
I haven't read up on these kinds of exploits, but what seems to happen
in this case is that some additional content gets posted to the
application (CGI program), and due to lack of "validation", this
content gets generated by the application "as is". Since this content
is now considered by the browser to have originated from the
application (or rather, its site), the cookie information associated
with that site is available to the additional content, and when some
JavaScript in that content presents the cookie information to another
site, the browser considers this as intentional behaviour of the
application in question.
> The alleged fix seems to involve more complete argument checking, is
> that required for any such defence? What should the request response be?
Argument checking and validation is the key here - never let your
application emit input from untrusted sources (i.e. every source)
in the form it was received, regardless of where the output is
going to be used. Having said that, it is surprising that bizarre, and
potentially illegal, URLs can be passed to servers in this way.
Paul
You have to be a little careful about expecting even minimally valid
input, since an attacker can submit invalid data. However, in this case
the victim has to follow the invalid link, so it is the browser's fault
that it submits invalid data. OTOH, I have seen numerous places where
poorly-written scripts depend on embedding <>'s in attributes, and
browsers tend to be forgiving of HTML authors' mistakes.
> The alleged fix seems to involve more complete argument checking, is
> that required for any such defence? What should the request response be?
In almost all exploits like this, the solution is to do proper quoting,
not argument checking. Otherwise you make valid input illegal.
Sometimes this "valid" input is borderline, and you may not want to
include it anyway... but filenames like "test>out" are valid (but
require quoting in the shell -- but not in open!). Many people don't
expect characters like " or <> in their input, but later on they might
be appropriate (e.g., someone entering their name as Jesse "The Body"
Ventura)
This quoting can't be done generally, as different places need different
quoting -- the most common being URL quoting, HTML quoting, shell
quoting, and SQL quoting.
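Three of those four contexts have stdlib helpers today (these are the modern Python 3 names; the thread's era would have used cgi.escape, urllib.quote, and hand-rolled shell quoting):

```python
import html
import shlex
import urllib.parse

# One value, three output contexts, three different quoting rules.
value = 'Jesse "The Body" Ventura <test>out'

print(html.escape(value))          # HTML body/attribute context
print(urllib.parse.quote(value))   # URL path/query component
print(shlex.quote(value))          # POSIX shell command line
# SQL is deliberately absent: use parameterized queries, not string quoting.
```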
PHP does try to do general quoting -- I can't remember the setting, but
it's common for it to be set up to do backslash quoting of all input.
However, this is stupid. Backslash quoting does nothing for HTML
output. You'll often see PHP-generated pages where ' is replaced with
\', usually inappropriately. It only helps in SQL and shell commands.
I find the shell to be horribly inappropriate for CGI programs anyway --
os.popen2 and friends can take a list for the command, which is superior
and avoids most exploits (but you should be careful about -X options). SQL
quoting is obnoxious, because you often will construct a SQL statement
from multiple sources, some of which come from the user (and are
\-quoted) and some of which do not. If you double-quote the user's input,
you will again get spurious \'s (since input like "joe'; arbitrary sql"
will become "joe\'; arbitrary sql" and then "'joe\\\'; arbitrary sql'")
Perl's tainting is better, but simple thoughtfulness is sufficient,
IMHO. And thorough quoting.
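Both points can be sketched with today's stdlib (subprocess here, though os.popen2 with a list behaved the same way; the filename and table are invented):

```python
import sqlite3
import subprocess

# An argument list bypasses the shell entirely, so "test>out" needs no
# quoting, and "--" keeps a filename like "-rf" from being read as options.
filename = "test>out"
subprocess.run(["ls", "-l", "--", filename], check=False,
               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# Parameterized SQL keeps data out of the statement, so input like
# "joe'; arbitrary sql" is never backslash-mangled or doubly quoted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("joe'; arbitrary sql",))
row = conn.execute("SELECT name FROM users WHERE name = ?",
                   ("joe'; arbitrary sql",)).fetchone()
print(row[0])  # the value round-trips unmangled
```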
Ian
I don't really see what Windows has to do with security checks in
either a programming language or its libraries. You might have a point
if you were to refer to Java's security model, however.
> from www.python.org online docs
> (http://www.python.org/doc/current/lib/cgi-security.html)
> 11.2.6 Caring about security
> There's one important rule: if you invoke an external program (via the
> os.system() or os.popen() functions, or others with similar functionality),
> make very sure you don't pass arbitrary strings received from the client to
> the shell.
[...]
The problem is that this advice is only partially relevant to
cross-site scripting exploits. Yes, one should always treat untrusted
input very carefully and try to avoid recycling that input, but the
well-known, longstanding precaution of not just passing anything
you've received to the shell doesn't in any way suggest that emitting
input data in generated HTML pages could be dangerous.
It is common knowledge that if you're writing an HTML message board
program, it may be advisable to disallow the posting of arbitrary
HTML, but this is only strikingly obvious because of the nature of the
interactions between the user and the software - they are actually
being allowed to post content onto your site. Still, such restrictions
have typically been enforced to prevent "mischief", whereas the
exploits under discussion are more serious than that.
So, even if someone were to religiously follow the above advice, and I
suppose that most developers have been doing so since around 1995,
when most people stopped even considering writing the smallest of CGI
scripts in various shell languages anyway, they could still be
surprised by the exploits being discussed here. Of course, there are
other kinds of shell-like exploits such as the ineffectual use of
SQL-quoting on untrusted data in an application, but I would argue
that the exploits being discussed here are conceptually quite
different.
I suppose the best advice is: don't allow user data to enter your
"command/instruction model" - keep data and instructions separate at
all times.
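That separation is exactly what parameterized queries give you; a minimal sqlite3 sketch (the table, names, and credentials are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

# Classic login-box injection: the "password" is really a SQL fragment.
attack = "' OR '1'='1"

# Data spliced into the instruction stream -- the fragment becomes SQL,
# and the WHERE clause is always true.
unsafe = ("SELECT name FROM users WHERE name = 'alice' "
          "AND password = '%s'" % attack)
broken = conn.execute(unsafe).fetchall()   # logs in without the password

# Data kept separate -- the driver binds the value as an opaque string.
query = "SELECT name FROM users WHERE name = ? AND password = ?"
fixed = conn.execute(query, ("alice", attack)).fetchall()  # no match
print(broken, fixed)
```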
Paul
fwiw, a Swedish tabloid recently managed to log in as privileged
intraweb users on a whole bunch of commercial sites simply by
typing carefully selected SQL fragments into ordinary login boxes.
no real hacking required; just type some boilerplate SQL into
the password field, and you're in.
(some days, I wonder if programmer certification isn't such a
bad idea, after all...)
</F>