However, if a program runs out of memory, I should just let it crash,
right? Because if not, I'd have to write exception handlers everywhere
to prevent that, right?
So when would I actually use try-except?
If there can be several exceptions and I just want to catch one or two?
Like
try:
    blahaba
except SomeError:
    do something
I'm not sure what you're trying to do, but I think catching a
ZeroDivisionError exception is a good use of try-except.
I'm also not sure I would say you should just let a program crash if
it runs out of memory. From the user's perspective, you would want to
check memory conditions and raise an exception indicating that some
memory threshold has been reached. When that exception is raised, you
should report it to the user and exit gracefully.
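A minimal sketch of that idea (the 500 MB budget is arbitrary, the
resource probe is Unix-only, and ru_maxrss is reported in kilobytes on
Linux):

import sys
import resource  # Unix-only; one possible way to probe memory use

MEMORY_LIMIT_KB = 500 * 1024  # arbitrary 500 MB budget

class MemoryThresholdReached(Exception):
    """Raised when the process exceeds its memory budget."""

def check_memory():
    used = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if used > MEMORY_LIMIT_KB:
        raise MemoryThresholdReached("memory budget exceeded")

data = []
try:
    while True:
        data.append("x" * 1000000)  # simulate steadily growing usage
        check_memory()
except MemoryThresholdReached:
    print("Sorry: memory limit reached; exiting cleanly.")
    sys.exit(1)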
> If I get a ZeroDivisionError it is obviously a poor solution to use
> try-except, since it can be solved with an if-clause.
>
> However, if a program runs out of memory, I should just let it crash,
> right? Because if not, I'd have to write exception handlers everywhere
> to prevent that, right?
>
> So when would I actually use try-except?
Whenever you can do something *meaningful* in the except block to
handle the exception (e.g. recover). Is there any meaningful action
you can take when you run out of memory? If yes (e.g. write current
data to the disk and open a popup window that informs the user), then
use try/except, otherwise don't. IOW, code like the following
try:
    ...
except MemoryError:
    print "You ran out of memory!"
is typically useless; better to let the exception propagate to the
top level with a full traceback.
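If there is something meaningful to do, a sketch might look like this
(run_big_job and save_partial_results are made-up placeholders):

try:
    results = run_big_job()          # hypothetical long computation
except MemoryError:
    save_partial_results("out.dat")  # hypothetical: persist finished work
    print("Out of memory; partial results saved to out.dat")
    raise  # still let the traceback reach the top level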
HTH,
George
A ZeroDivisionError is better avoided with an if-clause, don't you
think? It is a predictable exception...
> A ZeroDivisionError is better avoided with an if-clause, don't you
> think? It is a predictable exception...
It depends. If zero-division is unlikely, then things would probably[*]
run faster without checking. If speed is what you're interested in, that
is...
Glenn
[*] Haven't checked, so don't really know :-)
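If you do want to check, the timeit module makes it easy; here's a
sketch (absolute numbers will vary by machine and Python version):

import timeit

eafp = """
try:
    r = a / b
except ZeroDivisionError:
    r = 0
"""

lbyl = """
if b != 0:
    r = a / b
else:
    r = 0
"""

# b is never zero here, so the exception is never actually raised
print(timeit.timeit(eafp, setup="a, b = 10, 3"))
print(timeit.timeit(lbyl, setup="a, b = 10, 3"))

When the exception is rarely raised, EAFP tends to come out ahead,
since entering a try block is cheap while raising an exception is not;
make b zero every time and the numbers flip.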
Basically, there's a general principle (EAFP: Easier to ask
forgiveness than permission) in Python to just "try" something and
then catch the exception if something goes wrong. This is in contrast
to e.g. C where you're supposed to "Look before you leap" (LBYL) and
check for possible error conditions before performing the operation.
One of the main advantages of the Python approach is that the
operation itself comes first in the code:
try:
    a = b/c
except ZeroDivisionError:
    # handle it
versus the LBYL approach:
if c == 0:
    # handle error
a = b/c
where if the error handling code isn't really short, it ends up
distracting you from the operation you're actually trying to perform.
This individual case (division by 0) might not be the best example due
to its simplicity, but you get the general point.
- Chris
Many Pythonistas would disagree with that.
Anyway there are some types of errors for which catching exceptions is
more robust because there's a gap between the time something is
checked and the time it's used, between which the circumstances can
change.
For instance, the following test can be subject to sporadic failures:
if os.path.exists(filename):
    f = open(filename)
Between the call to os.path.exists and the call to open, the file
could be removed by another process, which will result in an unhandled
exception. Also, sometimes files fail to open for other reasons, such
as permissions.
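The EAFP version closes that window, and covers the permissions case
too, because it reacts to open() actually failing rather than trying
to predict whether it will:

try:
    f = open(filename)
except IOError:
    # the file may have vanished, or permissions may be wrong
    f = None  # handle the failure here instead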
For things like divide-by-zero, there's no way a local value can
change between the zero test and the operation (except in uncommon
situations), so it's just a matter of style which way you do it.
Carl Banks
Of course, memory is a particularly hard example, because you may not
have any memory left with which to prepare the output...
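One common trick is to set aside a rainy-day reserve up front and
release it in the handler, so the error path has some room to work in
(a sketch; the 10 MB figure is arbitrary):

reserve = bytearray(10 * 1024 * 1024)  # ~10 MB set aside at startup

data = []
try:
    while True:
        data.append("x" * 1000000)  # grows until allocation fails
except MemoryError:
    reserve = None  # free the reserve so the handler itself has memory
    print("Out of memory; cleaning up and exiting.")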
I wouldn't say that the availability of EAFP in Python makes LBYL
obsolete. (Error checking seems to be too broad a subject to apply the
One Obvious Way maxim to.) C isn't always LBYL anyway; sometimes it's
DFTCFE: "Don't forget to check for errors".
I tend to use EAFP to check if something "wrong" happened (a missing
file, invalid input, etc.), and LBYL for expected conditions that can
occur with valid input, even when that condition could be tested with
a try...except. For instance, I'd write something like this:
if x is not None:
    y = x.calculate_value()
else:
    y = default_value
Main reason I do this is to document that None is an expected and
valid value for x, and not indicative of a problem. But it's purely a
matter of style and neither way is wrong.
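For comparison, the EAFP spelling of the same logic would be:

try:
    y = x.calculate_value()
except AttributeError:
    y = default_value

One wrinkle with the EAFP version: it also swallows an AttributeError
raised inside calculate_value() itself, which can hide real bugs.
That's another reason I prefer the explicit None test here.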
Carl Banks
> I wouldn't say that the availability of EAFP in Python makes LBYL
> obsolete.
When using CPython, EAFP at the Python level always involves LBYL at
the C level, and it should be obvious to any programmer that checking
for the same thing twice is quite often a waste of time and resources.
</F>
I don't think that's true. For example, here is the code that
actually opens a file within the open() function:
if (NULL == f->f_fp && NULL != name) {
    Py_BEGIN_ALLOW_THREADS
    f->f_fp = fopen(name, newmode);
    Py_END_ALLOW_THREADS
}
if (f->f_fp == NULL) {
Clearly it tries to open the file, and handles the error if it fails.
EAFP, even though it wasn't using an exception.
Of course, underneath fopen there could be LBYL somewhere, but that's
at either the system library level or the OS level. Perhaps it's part
of what you meant by C level, since those guys probably are written
in C.
But then I still don't think we can say LBYL *always* occurs at the C
level, since in some cases the library and system calls that Python
wraps rely on processor exceptions and stuff like that. Which means
any LBYLing is going on inside the CPU, so it's at the hardware level.
I realize all of this is tangential to your point.
> and it should be obvious to any programmer that checking for
> the same thing twice is quite often a waste of time and resources.
Well if there's a section where performance is important I'd use
EAFP. It's no big deal.
Carl Banks