
file.close()


Bryan
Jul 23, 2003, 11:17:30 PM

I'm curious to know how others handle the closing of files. I seem to
always end up with this pattern, even though I rarely see others do it:

f1 = file('file1')
try:
    # process f1
finally:
    f1.close()

which explicitly closes f1

or for multiple files:

f1 = file('file1')
try:
    f2 = file('file2')
    try:
        # process f1 & f2
    finally:
        f2.close()
finally:
    f1.close()

which explicitly closes f1 & f2. Any exceptions opening f1 or f2 are
handled outside of this structure or are allowed to fall out of the
program. I'm aware that files will automatically be closed when the
process exits. I'm just curious how others do this for small scripts and
larger programs. Is what I'm doing overkill?

thanks,

bryan


Erik Max Francis
Jul 23, 2003, 11:29:27 PM

Bryan wrote:

> which explicitly closes f1 & f2. Any exceptions opening f1 or f2 are
> handled outside of this structure or are allowed to fall out of the
> program. I'm aware that files will automatically be closed when the
> process exits. I'm just curious how others do this for small scripts
> and larger programs. Is what I'm doing overkill?

No, not at all; it's safe and portable. Python the language does not
specify when objects get reclaimed, although CPython the implementation
does it promptly. Use of external resources -- which should be released
as soon as you're done with them -- is best handled in try/finally
clauses.

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ It is fatal to enter any war without the will to win it.
\__/ Douglas MacArthur

Ben Finney
Jul 23, 2003, 11:12:57 PM

On Wed, 23 Jul 2003 20:29:27 -0700, Erik Max Francis wrote:
> Bryan wrote:
>> Is what I'm doing overkill?
>
> Use of external resources -- which should be released as soon as
> you're done with them -- are best done in try/finally clauses.

It seems that nesting the 'try' clauses doesn't scale well. What if
fifty files are opened? Must the nesting level of the 'try' clauses be
fifty also, to close them promptly?

--
\ "God forbid that any book should be banned. The practice is as |
`\ indefensible as infanticide." -- Dame Rebecca West |
_o__) |
http://bignose.squidly.org/ 9CFE12B0 791A4267 887F520C B7AC2E51 BD41714B

Erik Max Francis
Jul 24, 2003, 12:12:34 AM

Ben Finney wrote:

> It seems that nesting the 'try' clauses doesn't scale well. What if
> fifty files are opened? Must the nesting level of the 'try' clauses be
> fifty also, to close them promptly?

If you're manipulating fifty files in one block, presumably you're doing
so in a uniform way:

allFiles = [...]
try:
    ...
finally:
    for eachFile in allFiles:
        eachFile.close()

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ Man is a hating rather than a loving animal.
\__/ Rebecca West

Carl Banks
Jul 24, 2003, 12:17:25 AM

Ben Finney wrote:
> On Wed, 23 Jul 2003 20:29:27 -0700, Erik Max Francis wrote:
>> Bryan wrote:
>>> Is what I'm doing overkill?
>>
>> Use of external resources -- which should be released as soon as
>> you're done with them -- are best done in try/finally clauses.
>
> It seems that nesting the 'try' clauses doesn't scale well. What if
> fifty files are opened? Must the nesting level of the 'try' clauses be
> fifty also, to close them promptly?

If you need 50 open files, you almost certainly want to have a way of
organizing them. Probably they'll be in a list or dictionary. So, if
they're in a list, for example, you can do this:


filelist = []
try:
    filelist.append(open(filename[0]))
    filelist.append(open(filename[1]))
    ...
    do_something(filelist)
finally:
    for f in filelist:
        f.close()


--
CARL BANKS

Ben Finney
Jul 24, 2003, 12:06:00 AM

Bryan (original poster) wrote:
> f1 = file('file1')
> try:
>     f2 = file('file2')
>     try:
>         # process f1 & f2
>     finally:
>         f2.close()
> finally:
>     f1.close()


On Wed, 23 Jul 2003 21:12:34 -0700, Erik Max Francis wrote:
> Ben Finney wrote:
>> It seems that nesting the 'try' clauses doesn't scale well. What if
>> fifty files are opened? Must the nesting level of the 'try' clauses
>> be fifty also, to close them promptly?
>
> If you're manipulating fifty files in one block, presumably you're
> doing so in a uniform way:
>
> allFiles = [...]
> try:
>     ...
> finally:
>     for eachFile in allFiles:
>         eachFile.close()

This doesn't match Bryan's nested structure above, which you blessed as
not "overkill" (in his words). It was this that I considered a
poorly-scaling structure, or "overkill" since only one 'try' block is
required. Do you disagree?

--
\ "Too many Indians spoil the golden egg." -- Sir Joh |
`\ Bjelke-Petersen |

Erik Max Francis
Jul 24, 2003, 1:19:07 AM

Ben Finney wrote:

> This doesn't match Bryan's nested structure above, which you blessed as
> not "overkill" (in his words). It was this that I considered a
> poorly-scaling structure, or "overkill" since only one 'try' block is
> required. Do you disagree?

It uses try/finally to secure the closing of many files in a timely
manner. In that sense, it certainly fits the pattern. It doesn't have
the same nested pattern, but try/finally isn't at issue here. If you
had code which opened 50 files and looked like:

fileOne = file(...)
fileTwo = file(...)
fileThree = file(...)
...
fileFortyNine = file(...)
fileFifty = file(...)

I would say you are doing something wrong. A more systematic handling
of many, many files is indicated whether or not you're using the
try/finally idiom to ensure files get closed in a timely manner.

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ Together we can take this one day at a time
\__/ Sweetbox

Bryan
Jul 24, 2003, 1:20:15 AM

>
> If you need 50 open files, you almost certainly want to have a way of
> organizing them. Probably they'll be in a list or dictionary. So, if
> they're in a list, for example, you can do this:
>
>
> filelist = []
> try:
>     filelist.append(open(filename[0]))
>     filelist.append(open(filename[1]))
>     ...
>     do_something(filelist)
> finally:
>     for f in filelist:
>         f.close()
>


Erik, Carl... thanks... this is exactly what I was looking for.

bryan


Ben Finney
Jul 24, 2003, 1:26:15 AM

On Thu, 24 Jul 2003 05:20:15 GMT, Bryan wrote:
>> filelist = []
>> try:
>>     filelist.append(open(filename[0]))
>>     filelist.append(open(filename[1]))
>>     ...
>>     do_something(filelist)
>> finally:
>>     for f in filelist:
>>         f.close()
>
> erik, carl... thanks... this is exactly what i was looking for

The only substantial difference I see between this and what you
originally posted, is that there is only one 'try...finally' block for
all the file open/close operations. Is this what you were wanting
clarified?

--
\ "I spent all my money on a FAX machine. Now I can only FAX |
`\ collect." -- Steven Wright |

Bryan
Jul 24, 2003, 2:13:43 AM

"Ben Finney" <bignose-h...@and-benfinney-does-too.id.au> wrote in
message news:slrnbhusl6.13c.b...@iris.polar.local...

> On Thu, 24 Jul 2003 05:20:15 GMT, Bryan wrote:
> >> filelist = []
> >> try:
> >>     filelist.append(open(filename[0]))
> >>     filelist.append(open(filename[1]))
> >>     ...
> >>     do_something(filelist)
> >> finally:
> >>     for f in filelist:
> >>         f.close()
> >
> > erik, carl... thanks... this is exactly what i was looking for
>
> The only substantial difference I see between this and what you
> originally posted, is that there is only one 'try...finally' block for
> all the file open/close operations. Is this what you were wanting
> clarified?
>

Well, it wasn't exactly clarification as much as seeing another, more
scalable solution. I know a lot of people don't explicitly close files in
Python, but I always do, even for small scripts.

thanks again,

bryan


Ben Finney
Jul 24, 2003, 2:12:36 AM

On Wed, 23 Jul 2003 22:19:07 -0700, Erik Max Francis wrote:
> Ben Finney wrote:
>> This doesn't match Bryan's nested structure above, which you blessed
>> as not "overkill" (in his words).
> It doesn't have the same nested pattern, but try/finally isn't at
> issue here.

Judging by Bryan's responses elsewhere in this thread, the multiple
nested 'try...finally' is indeed what he was asking about. The question
seems to be answered now.

--
\ "Those who will not reason, are bigots, those who cannot, are |
`\ fools, and those who dare not, are slaves." -- "Lord" George |
_o__) Gordon Noel Byron |

Francois Pinard
Jul 24, 2003, 9:19:53 AM

[Bryan]

> I'm curious to know how others handle the closing of files. [...] I'm
> aware that files will automatically be closed when the process exits.

For one, I systematically avoid cluttering my code with unneeded `close'.
The advantages are simplicity and legibility, both utterly important to me.

However, I do understand that if I ever have to move a Python script
to Jython, I will have to revise my scripts to add the clutter I am
sparing today. I am quite willing to do that revision if it occurs.
Until then, I prefer keeping my scripts as neat as possible.

For me, explicitly closing a file, for which the only reference is about
to disappear as the function exits, would be very similar to using
`del' on any variable I happened to use in that function: gross overkill...

The only reason to call `close' explicitly is when there is a need to close
prematurely. Absolutely no doubt that such needs exist at times. But
closing all the time "just in case" is symptomatic of unsure programming.
Or else, it is using Python while still thinking in other languages.

--
François Pinard http://www.iro.umontreal.ca/~pinard

Bryan
Jul 24, 2003, 9:48:30 PM

"Francois Pinard" <pin...@iro.umontreal.ca> wrote in message
news:mailman.1059052897...@python.org...

You are correct. This is so awesome... at least for me it is... now I
remove a lot of my clutter too :) I just did some tests on Windows by
trying to delete file1 at the command prompt when raw_input is called.

f = file('file1')
raw_input('pause')

### the file is NOT closed


file('file1')
raw_input('pause')

### the file IS closed


f = file('file1')
del f
raw_input('pause')

### the file IS closed


def foo():
    f = file('file1')

foo()
raw_input('pause')

### the file IS closed

Can you explain to me how the file gets closed? I'm sure that garbage
collection hasn't happened at the point where I call raw_input. It must
have something to do with the reference count of the file object. Does
Python immediately call close for you when the reference count goes to
zero? I want the dirty details...

thanks,

bryan


Erik Max Francis
Jul 24, 2003, 10:57:44 PM

Bryan wrote:

> You are correct. This is so awesome... at least for me it is... now I
> remove a lot of my clutter too :) I just did some tests on Windows by
> trying to delete file1 at the command prompt when raw_input is called.

You don't have to use so circuitous a mechanism, just define a custom
class with a __del__ method:

>>> class C:
...     def __del__(self): print 'C.__del__'
...
>>> c = C()
>>> del c
C.__del__
>>> def f():
...     C()
...
>>> f()
C.__del__
>>> def g():
...     c = C()
...
>>> g()
C.__del__

> Can you explain to me how the file gets closed? I'm sure that garbage
> collection hasn't happened at the point where I call raw_input. It must
> have something to do with the reference count of the file object. Does
> Python immediately call close for you when the reference count goes to
> zero? I want the dirty details...

Yes, the CPython implementation destroys objects as soon as their
reference count goes to zero. In the latter three cases, you're not
explicitly closing the file, but the file object has its last reference
removed (either by explicit deletion or by going out of scope at the end
of a local block), and so the file gets closed. You would see different
behavior in Jython, for instance, which, since it is implemented in
Java, uses Java's rules for finalization (namely, that it is not
specified how promptly finalization occurs). Python the language -- a
bigger concept than either CPython or Jython -- leaves it unspecified
when objects are destroyed.

It is _never_ a bad idea to explicitly close files, or explicitly shut
down any access to important physical resources.
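
To make the reference-counting detail concrete, here's a minimal sketch
(CPython-specific behaviour; the file name is just a placeholder):

import sys

f = open('refcount.tmp', 'w')   # placeholder file name
count = sys.getrefcount(f)      # typically 2: the name 'f' plus the
                                # temporary reference held by the call itself
g = f                           # a second name bound to the same file object
del g                           # back down to the previous count
del f                           # last reference gone: CPython closes the file
                                # right here; other implementations may defer
                                # it indefinitely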

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ War is the province of chance.
\__/ Karl von Clausewitz

Delaney, Timothy C (Timothy)
Jul 24, 2003, 10:21:35 PM

> From: Francois Pinard [mailto:pin...@iro.umontreal.ca]

>
> The only reason to call `close' explicitly is when there is a need to
> close prematurely. Absolutely no doubt that such needs exist at times.
> But closing all the time "just in case" is symptomatic of unsure
> programming. Or else, it is using Python while still thinking in other
> languages.

No - it is writing portable code that conforms to the Python language documentation. But we've had this argument before.

Even in CPython, closing before a reference goes away is vital in many cases. The simplest example is writing to a file, then reading it. You don't always want to keep a copy of the data in memory - it's often a lot better to stream the data, even if you have to do it multiple times.
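
A minimal sketch of that write-then-read case (the file name is just
illustrative); the explicit close() is what guarantees the buffered data
is actually on disk before the second open() reads it back:

path = 'stream.tmp'            # illustrative name

out = open(path, 'w')
try:
    out.write('generated data\n')
finally:
    out.close()                # flushes the buffer to disk

inp = open(path)
try:
    data = inp.read()          # sees the complete contents
finally:
    inp.close()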

Tim Delaney

Ulrich Petri
Jul 25, 2003, 3:37:46 AM

"Francois Pinard" <pin...@iro.umontreal.ca> schrieb im Newsbeitrag
news:mailman.1059052897...@python.org...

Fire up your Python and type "import this", read the output, and rethink
your coding techniques.

Ciao Ulrich


Paul Rubin
Jul 25, 2003, 3:54:38 AM

Francois Pinard <pin...@iro.umontreal.ca> writes:
> For one, I systematically avoid cluttering my code with unneeded `close'.
> The advantages are simplicity and legibility, both utterly important to me.
> ...
> The only reason to call `close' explicitly is when there is a need to close
> prematurely. Absolutely no doubt that such needs exist at times. But
> closing all the time "just in case" is symptomatic of unsure programming.
> Or else, it is using Python while still thinking in other languages.

Unfortunately Python doesn't guarantee that you won't leak file
descriptors that way. CPython happens to gc them when the last
reference disappears, and your willingness to patch the code if you
move to a different implementation may be workable for you (that's
something only you can decide). But if you want to write correct code
under the current Python spec, you have to include those messy closes.

I think a much better solution would involve a language extension,
maybe a macro in some hypothetical macro extension for Python. E.g.

with f = open(frob):
    [do stuff with f]

could be equivalent to:

f = open(frob)
try:
    [do stuff with f]
finally:
    f.done()  # calls f.close()
    del f

"done" here is a generic method that gets called on exiting a "with"
block.
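
In the meantime you can approximate this with a helper function that owns
the try/finally; the helper and callable names below are illustrative
only, not part of the proposal:

def with_open_file(name, mode, action):
    # open the file, hand it to 'action', and always close it afterwards
    f = open(name, mode)
    try:
        return action(f)
    finally:
        f.close()

def count_lines(f):
    return len(f.readlines())

# line_count = with_open_file('file1', 'r', count_lines)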

Bengt Richter
Jul 25, 2003, 10:21:18 AM

On 25 Jul 2003 00:54:38 -0700, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
[...]

>I think a much better solution would involve a language extension,
>maybe a macro in some hypothetical macro extension for Python. E.g.
>
> with f = open(frob):
>     [do stuff with f]
>
>could be equivalent to:
>
> f = open(frob)
> try:
>     [do stuff with f]
> finally:
>     f.done() # calls f.close()
>     del f
>
>"done" here is a generic method that gets called on exiting a "with"
>block.
First reaction == +1, but some questions...

1) Is f.done() really necessary? I.e., doesn't an explicit del f take care of it
if the object has been coded with a __del__ method? I.e., the idea was to get
the effect of CPython's immediate effective del on ref count going to zero, right?

2) How about with two files (or other multiple resources)?

with f1, f2 = file('f1'), file('f2'):
    [do stuff with f1 & f2]

What is the canonical expansion?

f1 = file('f1')
try:
    f2 = file('f2')
    try:
        [do stuff with f1 & f2]
    finally:
        del f2  # leaving f2.done() to f2.__del__
finally:
    del f1  # ditto-like

or ??

Also, what about the attribute version, i.e.,

ob.f = file(frob)
try:
    [do stuff with ob.f]
finally:
    del ob.f  # calls f.__del__, which calls/does f.close()

I.e., ob.f = something could raise an exception (e.g., a read-only property)
*after* file(frob) has succeeded. So I guess the easiest would be to limit
the left hand side to plain names...

Note that that restriction does not apply e.g. to a for loop construct:

>>> class SE(object):
...     def _setp(self, val): print 'Side effect:', val
...     p = property(None, _setp)
...
>>> se = SE()
>>> for se.p in range(5): pass
...
Side effect: 0
Side effect: 1
Side effect: 2
Side effect: 3
Side effect: 4

Regards,
Bengt Richter

Erik Max Francis
Jul 25, 2003, 3:10:20 PM

Bengt Richter wrote:

> 1) Is f.done() really necessary? I.e., doesn't an explicit del f take
> care of it if the object has been coded with a __del__ method? I.e.,
> the idea was to get the effect of CPython's immediate effective del on
> ref count going to zero, right?

It wouldn't if there were circular references at that point. If you're
going to have some kind of `with' structure that constrains lifetimes,
I'd think you'd probably want something more concrete than just object
deletion; you'd want to make sure a "stop whatever you were doing now"
method is present and gets called. But maybe that really depends on the
primary thing that the `with' construct would be used for.
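
For instance (a minimal sketch; the class and file names are
illustrative), a file caught in a reference cycle stays open after its
last name is deleted, until the cycle collector runs:

import gc

class Holder(object):
    pass

h = Holder()
h.f = open('holder.tmp', 'w')
h.me = h        # reference cycle: h refers to itself

del h           # the refcount never reaches zero, so the file stays open
gc.collect()    # only now is the Holder reclaimed and its file closed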

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ Love is when you wake up in the morning and have a big smile.
\__/ Anggun

Hallvard B Furuseth
Jul 25, 2003, 3:51:02 PM

Francois Pinard wrote:
> For one, I systematically avoid cluttering my code with unneeded `close'.

What happens if close fails during GC? Will it still raise an
exception? If so, the exception could happen at an unfortunate place in
the code.

Um. Explicit close does raise an exception if it fails, right?

--
Hallvard

Paul Rubin
Jul 25, 2003, 4:35:49 PM

bo...@oz.net (Bengt Richter) writes:
> >"done" here is a generic method that gets called on exiting a "with"
> >block.
> First reaction == +1, but some questions...
>
> 1) Is f.done() really necessary? I.e., doesn't an explicit del f
> take care of it if the object has been coded with a __del__
> method? I.e., the idea was to get the effect of CPython's
> immediate effective del on ref count going to zero, right?

The ref count might not be zero. Something inside the "with" block
might make a new reference and leave it around.

> 2) how about with two files (or other multiple resources)?
>
> with f1, f2 = file('f1'), file('f2'):
>     [do stuff with f1 & f2]
>
> What is the canonical expansion?

I think f1.done and f2.done should both get called.

> Also, what about the attribute version, i.e.,
>
> ob.f = file(frob)
> try:
>     [do stuff with ob.f]
> finally:
>     del ob.f  # calls f.__del__, which calls/does f.close()
>
> I.e., ob.f = something could raise an exception (e.g., a read-only property)
> *after* file(frob) has succeeded. So I guess the easiest would be to limit
> the left hand side to plain names...

The assignment should be inside the try. If I had it on the outside
before, that was an error.

Bengt Richter
Jul 25, 2003, 5:57:44 PM

On 25 Jul 2003 13:35:49 -0700, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:

>bo...@oz.net (Bengt Richter) writes:
>> >"done" here is a generic method that gets called on exiting a "with"
>> >block.
>> First reaction == +1, but some questions...
>>
>> 1) Is f.done() really necessary? I.e., doesn't an explicit del f
>> take care of it if the object has been coded with a __del__
>> method? I.e., the idea was to get the effect of CPython's
>> immediate effective del on ref count going to zero, right?
>
>The ref count might not be zero. Something inside the "with" block
>might make a new reference and leave it around.
>

Aha. In that case, is the del just a courtesy default action?

>> 2) how about with two files (or other multiple resources)?
>>
>> with f1, f2 = file('f1'), file('f2'):
>>     [do stuff with f1 & f2]
>>
>> What is the canonical expansion?
>
>I think f1.done and f2.done should both get called.

Consistently ;-)

>
>> Also, what about the attribute version, i.e.,
>>
>> ob.f = file(frob)
>> try:
>>     [do stuff with ob.f]
>> finally:
>>     del ob.f  # calls f.__del__, which calls/does f.close()
>>
>> I.e., ob.f = something could raise an exception (e.g., a read-only property)
>> *after* file(frob) has succeeded. So I guess the easiest would be to limit
>> the left hand side to plain names...
>
>The assignment should be inside the try. If I had it on the outside
>before, that was an error.

Really? Don't you want a failing f = file(frob) exception to skip the finally,
since f might not even be bound to anything in that case?

The trouble I was pointing to is having two possible causes for exception, and only
one of them being of relevance to the file resource.

Maybe if you wanted to go to the trouble, it could be split something like

with ob.f = file(frob):
    [do stuff with ob.f]

becoming

_tmp = file(frob)
try:
    ob.f = _tmp
    [ do stuff with ob.f ]
finally:
    _tmp.done()
    del _tmp
    del ob.f

Not sure about the order. Plain ob.f.done() would not guarantee a call to _tmp.done(),
since ob could refuse to produce f, and f could be a wrapper produced during ob.f = _tmp,
and it might not have a __del__ method etc etc. _tmp is just for some internal temp binding.

Regards,
Bengt Richter

Erik Max Francis
Jul 27, 2003, 7:31:19 PM

Dennis Lee Bieber wrote:

> Hallvard B Furuseth fed this fish to the penguins on Friday 25 July
> 2003 12:51 pm:
>
> > Um. Explicit close does raise an exception if it fails, right?
>
...
> Looks like it doesn't care... As long as the file /had/ been opened
> first (doesn't look like the interactive interpreter collected f until
> the del either).

I don't think you've demonstrated that; all you've shown is that builtin
Python file objects make file closing idempotent. You haven't
demonstrated a case where there actually is an I/O error that occurs
when .close gets called. In particular, I _really_ don't know what you
meant to show with this snippet, as it really has nothing to do with
files at all:

> >>> f
> <closed file 't.t', mode 'w' at 0x809dc88>
> >>> del f
> >>> f.close()
> Traceback (most recent call last):
> File "<stdin>", line 1, in ?
> NameError: name 'f' is not defined

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ He who knows how to be poor knows everything.
\__/ Jules Michelet

Jeff Epler
Jul 27, 2003, 10:26:22 PM

On Sun, Jul 27, 2003 at 04:31:19PM -0700, Erik Max Francis wrote:
> You haven't
> demonstrated a case where there actually is an I/O error that occurs
> when .close gets called.

What makes you believe that a Python file object's "close" can never error?
"close" corresponds to the fclose() function of the C standard library,
and the manpages have plenty to say on the subject (see below).

I just created a small filesystem (1000 1k blocks, over half of which
was used by filesystem overhead) and got Python to do this:
>>> f = open("/mnt/tmp/x", "w")
>>> f.write("x" * (453*1024+511))


>>> f.close()
Traceback (most recent call last):
File "<stdin>", line 1, in ?

IOError: [Errno 28] No space left on device

Curiously enough, this doesn't even print the 'Exception in __del__ ignored' message:
>>> f = open("/mnt/tmp/x", "w")
>>> f.write("x" * (453*1024+511))
>>> del f
even though the failed stdio fclose must still have been called.

stdio buffering kept the last few bytes of the f.write() from actually
being sent to the disk, but the f.close() call must dump them to the
disk, at which time the "disk full" condition is actually seen. While I
had to contrive this situation for this message, it's exactly the kind
of thing that will happen to you when your software is at the customer's
site and you've left for a month in rural Honduras where there aren't
any easily-accessible phones, let alone easy internet access.
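
The practical upshot, as a sketch: keep the write inside the try and the
close() in the finally, so an error discovered at flush time surfaces as
an IOError to the caller instead of vanishing during garbage collection
(same illustrative path as above):

f = open('/mnt/tmp/x', 'w')
try:
    f.write('x' * (453*1024+511))
finally:
    f.close()    # raises IOError (e.g. Errno 28) here if the final
                 # flush fails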

Jeff

[from `man fclose']
RETURN VALUE
Upon successful completion 0 is returned. Otherwise, EOF is returned
and the global variable errno is set to indicate the error. In either
case any further access (including another call to fclose()) to the
stream results in undefined behaviour.
[...]
The fclose function may also fail and set errno for any of the errors
specified for the routines close(2), write(2) or fflush(3).

[from `man close']
ERRORS
EBADF fd isn’t a valid open file descriptor.

EINTR The close() call was interrupted by a signal.

EIO An I/O error occurred.

[from `man write']
ERRORS
EBADF fd is not a valid file descriptor or is not open for writing.

EINVAL fd is attached to an object which is unsuitable for writing.

EFAULT buf is outside your accessible address space.

EFBIG An attempt was made to write a file that exceeds the
implementation-defined maximum file size or the process’ file size
limit, or to write at a position past the maximum allowed offset.

EPIPE fd is connected to a pipe or socket whose reading end is closed.
When this happens the writing process will also receive a SIGPIPE
signal. (Thus, the write return value is seen only if the program
catches, blocks or ignores this signal.)

EAGAIN Non-blocking I/O has been selected using O_NONBLOCK and the
write would block.

EINTR The call was interrupted by a signal before any data was written.

ENOSPC The device containing the file referred to by fd has no room for
the data.

EIO A low-level I/O error occurred while modifying the inode.

Other errors may occur, depending on the object connected to fd.

[from `man fflush']
ERRORS
EBADF Stream is not an open stream, or is not open for writing.

The function fflush may also fail and set errno for any of the errors
specified for the routine write(2).

Erik Max Francis
Jul 27, 2003, 11:24:21 PM

Jeff Epler wrote:

> On Sun, Jul 27, 2003 at 04:31:19PM -0700, Erik Max Francis wrote:
>
> > You haven't demonstrated a case where there actually is an I/O error
> > that occurs when .close gets called.
>
> What makes you believe that a Python file object's "close" can never
> error? "close" corresponds to the fclose() function of the C standard
> library, and the manpages have plenty to say on the subject (see
> below).

I never made any such claim. I was simply countering someone _else_
making that claim, who used a Python session snippet to try to
demonstrate it, by pointing out that he had demonstrated no such thing.
He simply called the close method twice, which gave no indication of
what he was looking for.

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE

/ \ What do women want?
\__/ Sigmund Freud
