Native raw IO and FILE* wrappers?


Olaf van der Spek

Nov 16, 2014, 8:24:26 AM
to std-pr...@isocpp.org
Hi,

Would there be any interest in wrappers for native raw (file) IO and FILE*?

Sometimes one needs native handles or FILE* for interoperability or for special features like memory-mapped IO, but standard C++ doesn't provide convenient RAII-friendly wrappers for these.
The idea seems trivial yet quite useful.
What do you think?
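To make the idea concrete, here is a rough sketch of the native-handle half (POSIX-only; the name unique_fd and its interface are hypothetical, not proposed wording):

```cpp
#include <cassert>   // for the usage checks below
#include <fcntl.h>   // POSIX open flags
#include <unistd.h>  // POSIX close
#include <utility>   // std::exchange

// Hypothetical move-only RAII wrapper for a POSIX file descriptor.
class unique_fd {
    int fd_ = -1;
public:
    unique_fd() = default;
    explicit unique_fd(int fd) noexcept : fd_(fd) {}
    unique_fd(unique_fd&& o) noexcept : fd_(std::exchange(o.fd_, -1)) {}
    unique_fd& operator=(unique_fd&& o) noexcept {
        if (this != &o) { reset(); fd_ = std::exchange(o.fd_, -1); }
        return *this;
    }
    ~unique_fd() { reset(); }                      // closes on destruction
    void reset() noexcept { if (fd_ >= 0) ::close(fd_); fd_ = -1; }
    int get() const noexcept { return fd_; }       // borrow for C APIs
    int release() noexcept { return std::exchange(fd_, -1); } // give up ownership
    explicit operator bool() const noexcept { return fd_ >= 0; }
};
```

Windows would want the same shape over HANDLE; the point is only that the pattern is small and mechanical.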

Grtz,

Olaf

Vicente J. Botet Escriba

Nov 16, 2014, 9:11:03 AM
to std-pr...@isocpp.org
On 16/11/14 14:24, Olaf van der Spek wrote:
Hi,

There is an ongoing proposal,
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3949.pdf, in
case this is what you are looking for.

Vicente

Olaf van der Spek

Nov 17, 2014, 6:47:25 PM
to std-pr...@isocpp.org


On Sunday, November 16, 2014 3:11:03 PM UTC+1, Vicente J. Botet Escriba wrote:
There is an ongoing proposal,
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3949.pdf, in
case this is what you are looking for.


It's not really what I'm looking for, but thx. It does take care of the RAII part, but it doesn't standardize a raw IO interface.

David Krauss

Nov 17, 2014, 8:27:37 PM
to std-pr...@isocpp.org

On 2014–11–18, at 7:47 AM, Olaf van der Spek <olafv...@gmail.com> wrote:

It's not really what I'm looking for, but thx. It does take care of the RAII part, but it doesn't standardize a raw IO interface.

FILE is buffered, not raw I/O. std::filebuf is the replacement. I’m not sure why there’s no sync_with_stdio and FILE access for buffers besides the three standard streams, but surely it was considered back when <iostream> was standardized. Perhaps it would after all make sense to have a separate class stdiobuf, but such a feature sounds fatally unsexy. IIRC you can find such a thing as a documented extra feature in the GNU library, and likely in others too. I don’t think it would make sense to provide any other C++ interface to FILE.

Raw file descriptors have widely varying semantics so it’s hard to do anything really reliably without constraining an FD to a particular kind of resource… including close(), which negates the value of using scoped_resource so generically.

Olaf van der Spek

Nov 18, 2014, 4:50:47 AM
to std-pr...@isocpp.org
On Tue, Nov 18, 2014 at 2:27 AM, David Krauss <pot...@gmail.com> wrote:
> FILE is buffered, not raw I/O.

I know, my idea is to offer two separate classes. One for native IO
and one for FILE.
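For the FILE half, even a trivial deleter over unique_ptr covers the RAII part; a sketch (not proposed wording, names hypothetical):

```cpp
#include <cassert>
#include <cstdio>
#include <memory>

// Hypothetical RAII ownership of a FILE*: fclose runs on destruction.
struct file_closer {
    void operator()(std::FILE* f) const noexcept { if (f) std::fclose(f); }
};
using unique_file = std::unique_ptr<std::FILE, file_closer>;

// Usage: unique_file f{std::fopen("data.txt", "r")};
// f.get() can be passed to any C API taking FILE*.
```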

> std::filebuf is the replacement. I’m not sure
> why there’s no sync_with_stdio and FILE access for buffers besides the three
> standard streams, but surely it was considered back when <iostream> was
> standardized. Perhaps it would after all make sense to have a separate class
> stdiobuf, but such a feature sounds fatally unsexy. IIRC you can find such a
> thing as a documented extra feature in the GNU library, and likely in others
> too. I don’t think it would make sense to provide any other C++ interface to
> FILE.

Some C libs use FILE parameters. You can't get a FILE* from a
std::filebuf, can you?

> Raw file descriptors have widely varying semantics so it’s hard to do
> anything really reliably without constraining an FD to a particular kind of
> resource… including close(), which negates the value of using
> scoped_resource so generically.

My idea was to focus on files. What differences in semantics would be
showstoppers for such a wrapper?
Do you think standard unbuffered file IO does not make sense?

--
Olaf

David Krauss

Nov 18, 2014, 6:15:30 PM
to std-pr...@isocpp.org
On 2014–11–18, at 5:50 PM, Olaf van der Spek <olafv...@gmail.com> wrote:

Some C libs use FILE parameters. You can't get a FILE* from a
std::filebuf, can you?

That’s what I mean by “class stdiobuf.” It would be compliant to implement filebuf like this, but likely poor QOI. The problem is that streambuf exposes a raw memory interface, and the only way to let its internal pointers update stdio would be to write the C library around the C++ library.

GNU stdio_filebuf is a thin wrapper providing direct access to a FILE*, but it’s unsynchronized. On the other hand their stdio_sync_filebuf is synchronized but that means bypassing the C++ buffering, and given a cursory reading of the source, it doesn’t use the locale codecvt either.

From the user’s perspective, anyway, <stdio.h> and FILE do exactly the same sort of buffering as filebuf, so a new interface wouldn’t be justified.

Raw file descriptors have widely varying semantics so it’s hard to do
anything really reliably without constraining an FD to a particular kind of
resource… including close(), which negates the value of using
scoped_resource so generically.

My idea was to focus on files. What differences in semantics would be
showstoppers for such a wrapper?
Do you think standard unbuffered file IO does not make sense?

POSIX close has a lot of edge cases. Depending on the kind of resource, it might block or the file descriptor may not be freed for reuse immediately. It might even fail and require a retry, on some systems.

FDs cover raw devices, memory maps, network connections, etc. For unbuffered files, we already have std::filebuf::pubsetbuf. Offering a FD wrapper supporting only files would be misleading to the user.
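For reference, the pubsetbuf idiom mentioned above looks like this; note that whether the request for unbuffered operation is honored is implementation-defined:

```cpp
#include <cassert>
#include <fstream>
#include <string>

// Request unbuffered output on a std::ofstream via the existing facility.
// pubsetbuf(nullptr, 0) must be called before the first I/O operation;
// the effect (ideally, no user-level buffer) is implementation-defined.
std::ofstream open_unbuffered(const std::string& path) {
    std::ofstream out;
    out.rdbuf()->pubsetbuf(nullptr, 0); // ask for no user-level buffer
    out.open(path);
    return out;
}
```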

Jim Porter

Nov 18, 2014, 8:57:02 PM
to std-pr...@isocpp.org
On 11/16/2014 7:24 AM, Olaf van der Spek wrote:
> Hi,
>
> Would there be any interest in wrappers for native raw (file) IO and FILE*?

I'd be very interested in wrappers for native file handles (file
descriptors and Windows HANDLEs). I brought this up on the discussion
list here:
<https://groups.google.com/a/isocpp.org/forum/#!topic/std-discussion/macDvhFDrjU>.
I haven't had the chance to write up a proposal yet, though.

- Jim


Olaf van der Spek

Nov 19, 2014, 6:05:37 AM
to std-pr...@isocpp.org
On Wed, Nov 19, 2014 at 12:13 AM, David Krauss <pot...@gmail.com> wrote:
> From the user’s perspective, anyway, <stdio.h> and FILE do exactly the same
> sort of buffering as filebuf, so a new interface wouldn’t be justified.

That doesn't solve the problem of interoperability with FILE* (in a
nice C++ way).

> POSIX close has a lot of edge cases. Depending on the kind of resource, it
> might block or the file descriptor may not be freed for reuse immediately.

How's that a problem?
If it is a problem, hasn't it already been solved for FILE and fstream close()?


> It might even fail and require a retry, on some systems.

Are you sure?
Linux man says: "Note that the return value should only be used for
diagnostics. In particular close() should not be retried after an
EINTR since this may cause a reused descriptor from another thread to
be closed."
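Following that guidance, a wrapper's destructor would presumably call close exactly once and treat the result as diagnostic only, e.g. (a Linux-oriented sketch; the helper name is hypothetical):

```cpp
#include <cassert>
#include <cerrno>
#include <fcntl.h>   // only for the usage check below
#include <unistd.h>

// Close exactly once, per the Linux close(2) guidance quoted above:
// no retry on EINTR, because the descriptor may already have been
// reused by another thread. The return value is for diagnostics only.
inline int close_once(int fd) noexcept {
    int r = ::close(fd);
    // Even if r == -1 and errno == EINTR, fd must be treated as closed.
    return r;
}
```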

> FDs cover raw devices, memory maps, network connections, etc. For unbuffered
> files, we already have std::filebuf::pubsetbuf. Offering a FD wrapper
> supporting only files would be misleading to the user.

If it happens to support other stuff that'd be fine with me.


--
Olaf

David Krauss

Nov 19, 2014, 10:18:17 AM
to std-pr...@isocpp.org

On 2014–11–19, at 7:05 PM, Olaf van der Spek <olafv...@gmail.com> wrote:

> That doesn't solve the problem of interoperability with FILE* (in a
> nice C++ way).

I think the GNU solutions are about as good as we can hope for. What else do you intend to gain?

>> POSIX close has a lot of edge cases. Depending on the kind of resource, it
>> might block or the file descriptor may not be freed for reuse immediately.
>
> How's that a problem?
> If it is a problem, hasn't it already been solved for FILE and fstream close()?

It’s a problem that the user can accidentally pass something besides an ordinary file.

It’s not solved in the standard library for things that have pathnames but shouldn’t be buffered (e.g. in /dev).

Creating an access interface that doesn’t require a pathname, and therefore is specifically suited to things without names or that aren’t definitely owned by the C++ object, is pure danger.

>> It might even fail and require a retry, on some systems.
>
> Are you sure?
> Linux man says: "Note that the return value should only be used for
> diagnostics. In particular close() should not be retried after an
> EINTR since this may cause a reused descriptor from another thread to
> be closed.”

It says that to contrast with other operating systems. IIRC Solaris is an offender. There’s plenty of literature on the general topic, for example http://stackoverflow.com/q/22603025/153285 .

>> Offering a FD wrapper
>> supporting only files would be misleading to the user.
>
> If it happens to support other stuff that'd be fine with me.

For other stuff the user would have the appearance of support but actual lurking errors in buffering and portability.

Ville Voutilainen

Nov 19, 2014, 10:39:34 AM
to std-pr...@isocpp.org
On 19 November 2014 17:18, David Krauss <pot...@gmail.com> wrote:
>> How's that a problem?
>> If it is a problem, hasn't it already been solved for FILE and fstream close()?
> It’s a problem that the user can accidentally pass something besides an ordinary file.
> It’s not solved in the standard library for things that have pathnames but shouldn’t be buffered (e.g. in /dev).

Well, if a user decides to create a buffered stream on top of such a descriptor, why should we prevent that? You can already do that in POSIX with fdopen and there's no particular protection against that, and it's questionable whether any protection is necessary.

> Creating an access interface that doesn’t require a pathname, and therefore is specifically suited to things without names or that aren’t definitely owned by the C++ object, is pure danger.

What danger?

>>> Offering a FD wrapper
>>> supporting only files would be misleading to the user.
>>
>> If it happens to support other stuff that'd be fine with me.
> For other stuff the user would have the appearance of support but actual lurking errors in buffering and portability.

Also, for other stuff, the user would gain the ability to use C++ iostreams on top of a wide variety of things where it would work, and does work with existing extensions just fine, like pipes.

David Krauss

Nov 20, 2014, 11:11:50 PM
to std-pr...@isocpp.org
On 2014–11–19, at 11:39 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:

On 19 November 2014 17:18, David Krauss <pot...@gmail.com> wrote:
How's that a problem?
If it is a problem, hasn't it already been solved for FILE and fstream close()?
It’s a problem that the user can accidentally pass something besides an ordinary file.
It’s not solved in the standard library for things that have pathnames but shouldn’t be buffered (e.g. in /dev).

Well, if a user decides to create a buffered stream on top of such a descriptor, why should we prevent that? You can already do that in POSIX with fdopen and there's no particular protection against that, and it's questionable whether any protection is necessary.

POSIX isn’t a role model for safety.

Creating an access interface that doesn’t require a pathname, and therefore is specifically suited to things without names or that aren’t definitely owned by the C++ object, is pure danger.

What danger?

Danger of continuing to use the FD or FILE despite the assumed ownership by the C++ object, or after the C++ object is destroyed. The implicit close is going to be surprising. “Yes, you can use your C resource with C++, but then you can’t use it with C again without reopening it using the path/URL.”

Danger of buffering something that shouldn’t be buffered, like a socket or a pipe. basic_filebuf may assume end-of-file when a read operation fails to fill the entire buffer, but that doesn’t hold true for such resources. Actual behavior may be platform-specific.

Also, for other stuff, the user would gain the ability to use C++ iostreams on top of a wide variety of things where it would work, and does work with existing extensions just fine, like pipes.

An unbuffered and non-owning descriptor streambuf might be reasonable. Adopting existing extensions is a better idea than adopting certain capabilities of existing extensions into existing facilities, or reinventing the wheel.

Raw I/O is often done asynchronously, though, so it might be better to extend Networking TS facilities to cover local devices and files than to start with a raw streambuf and try to lump FILE into the same extension. Boost.ASIO isn't specifically a network library in the first place. I've not used ASIO nor reviewed the TS so I can't really speak to that; I do see that it has a socket_streambuf interface to iostreams.

Ville Voutilainen

Nov 20, 2014, 11:38:00 PM
to std-pr...@isocpp.org
On 21 November 2014 06:11, David Krauss <pot...@gmail.com> wrote:
> It’s a problem that the user can accidentally pass something besides an
> ordinary file.
> It’s not solved in the standard library for things that have pathnames but
> shouldn’t be buffered (e.g. in /dev).
>
>
> Well, if a user decides to create a buffered stream on top of such a
> descriptor, why
> should we prevent that? You can already do that in posix with fdopen and
> there's
> no particular protection against that, and it's questionable whether
> any protection
> is necessary.
>
>
> POSIX isn’t a role model for safety.

I didn't claim it was. I asked why we should prevent such opening of a stream
on top of a descriptor.

> What danger?
>
>
> Danger of continuing to use the FD or FILE despite the assumed ownership by
> the C++ object, or after the C++ object is destroyed. The implicit close is

Which doesn't sound different from the "danger" of continuing to use a raw
pointer despite the assumed ownership by a unique_ptr.

> going to be surprising. “Yes, you can use your C resource with C++, but then

The implicit close is going to be what people expect, if we're going to fling out anecdotal guesses with regards to what will happen amongst random users.

> Danger of buffering something that shouldn’t be buffered, like a socket or a
> pipe. basic_filebuf may assume end-of-file when a read operation fails to
> fill the entire buffer, but that doesn’t hold true for such resources.

I fail to see how the unlikely potential of incorrect implementation is a reason not to add the facility.

> Raw I/O is often done asynchronously, though, so it might be better to
> extend Networking TS facilities to cover local devices and files than to
> start with a raw streambuf and try to lump FILE in the same extension.

I don't quite recall anyone suggesting "starting with a raw streambuf
and lumping FILE in the same extension", so I don't quite grasp how
we got there.

David Krauss

Nov 21, 2014, 12:10:35 AM
to std-pr...@isocpp.org
On 2014–11–21, at 12:37 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:

On 21 November 2014 06:11, David Krauss <pot...@gmail.com> wrote:
Danger of continuing to use the FD or FILE despite the assumed ownership by
the C++ object, or after the C++ object is destroyed. The implicit close is

Which doesn't sound different from the "danger" of continuing to use a raw
pointer despite the assumed ownership by a unique_ptr.

The purpose of unique_ptr is to express and implement ownership. Filebuf and iostreams provide buffering and parsing, and the motivation in this case is to glue together existing libraries in C and C++. Shared ownership would be essential to usefulness, never mind what’s surprising.

Danger of buffering something that shouldn’t be buffered, like a socket or a
pipe. basic_filebuf may assume end-of-file when a read operation fails to
fill the entire buffer, but that doesn’t hold true for such resources.

I fail to see how the unlikely potential of incorrect implementation is a reason
not to add the facility.

That’s not an incorrect implementation, it’s the current norm.

Sockets are buffered by the OS socket library, not by any user-level class. There’s no way to correctly buffer sockets by yourself or to put one inside a FILE.

Raw I/O is often done asynchronously, though, so it might be better to
extend Networking TS facilities to cover local devices and files than to
start with a raw streambuf and try to lump FILE in the same extension.

I don't quite recall anyone suggesting "starting with a raw streambuf
and lumping FILE in the same extension", so I don't quite grasp how
we got there.

You mentioned that existing extensions exist to handle e.g. pipes, although you didn't mention any. GNU stdio_sync_filebuf is one. That general avenue could be pursued, but I think Boost.ASIO could be more fertile ground.

Ville Voutilainen

Nov 21, 2014, 12:44:50 AM
to std-pr...@isocpp.org
On 21 November 2014 07:10, David Krauss <pot...@gmail.com> wrote:
> Which doesn't sound different from the "danger" of continuing to use a raw
> pointer despite the assumed ownership by a unique_ptr.
>
>
> The purpose of unique_ptr is to express and implement ownership. Filebuf and
> iostreams provide buffering and parsing, and the motivation in this case is

And they also implement resource ownership, and have done so since
before unique_ptr or even auto_ptr were introduced.

> to glue together existing libraries in C and C++. Shared ownership would be
> essential to usefulness, never mind what’s surprising.

Motivation in this case is to be able to open the underlying platform facility and wrap it under an iostream, not the sharing of the descriptor with platform-specific code.

>
> Danger of buffering something that shouldn’t be buffered, like a socket or a
> pipe. basic_filebuf may assume end-of-file when a read operation fails to
> fill the entire buffer, but that doesn’t hold true for such resources.
>
>
> I fail to see how the unlikely potential of incorrect implementation is a
> reason
> not to add the facility.
>
>
> That’s not an incorrect implementation, it’s the current norm.

Funny how that current norm is not the current norm - if my fstream over a networked file system file fails to fill the entire buffer, it will block, not eof-close.

> Sockets are buffered by the OS socket library, not by any user-level class.
> There’s no way to correctly buffer sockets by yourself or to put one inside
> a FILE.

I must say I have no idea what you're talking about here.

Do you have actual reasons why we shouldn't provide iostreams over native
descriptors?

David Krauss

Nov 21, 2014, 1:01:52 AM
to std-pr...@isocpp.org

On 2014–11–21, at 1:44 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:

> On 21 November 2014 07:10, David Krauss <pot...@gmail.com> wrote:
>
>> to glue together existing libraries in C and C++. Shared ownership would be
>> essential to usefulness, never mind what’s surprising.
>
> Motivation in this case is to be able to open the underlying platform facility
> and wrap it under an iostream, not the sharing of the descriptor with
> platform-specific
> code.

Who said the other code is platform-specific? It could be anything, like an XML library written in C.

> if my fstream over a networked file system file fails to fill the entire
> buffer, it will block, not eof-close.

The networked filesystem will behave as any other filesystem. It will cause read() to block.

Socket and pipe readers usually follow protocols that require a response before some buffer of unknown size happens to fill. User-side buffering is a misfeature.

>> Sockets are buffered by the OS socket library, not by any user-level class.
>> There’s no way to correctly buffer sockets by yourself or to put one inside
>> a FILE.
>
> I must say I have no idea what you're talking about here.

Do you know how pipes and BSD sockets get buffered?

> Do you have actual reasons why we shouldn't provide iostreams over native
> descriptors?

I’ve said all I need to say. I’ve also mentioned various alternatives and existing extensions, which doesn’t really jibe with the notion that “we shouldn’t provide iostreams over native descriptors.”

Ville Voutilainen

Nov 21, 2014, 1:15:01 AM
to std-pr...@isocpp.org
On 21 November 2014 08:01, David Krauss <pot...@gmail.com> wrote:
>>> to glue together existing libraries in C and C++. Shared ownership would be
>>> essential to usefulness, never mind what’s surprising.
>>
>> Motivation in this case is to be able to open the underlying platform facility
>> and wrap it under an iostream, not the sharing of the descriptor with
>> platform-specific
>> code.
>
> Who said the other code is platform-specific? It could be anything, like an XML library written in C.

Well, if you want an additional facility where you can wrap an iostream over a FILE* or a native descriptor without the stream closing the underlying handle, by all means. That doesn't mean we shouldn't pursue an iostream wrapper over a handle for the cases where the stream closes the handle.

>> file system file fails to fill the entire buffer, it will block, not eof-close.
>
> The networked filesystem will behave as any other filesystem. It will cause read() to block.

I fail to see how that's different from pipes.

> Socket and pipe readers usually follow protocols that require a response before some buffer of unknown size happens to fill. User-side buffering is a misfeature.

I don't know what you're talking about here.

>>> Sockets are buffered by the OS socket library, not by any user-level class.
>>> There’s no way to correctly buffer sockets by yourself or to put one inside
>>> a FILE.
>> I must say I have no idea what you're talking about here.
> Do you know how pipes and BSD sockets get buffered?

I think I do. And I still don't see why there's "no way" to buffer them in userspace on top of the system buffering.

>> Do you have actual reasons why we shouldn't provide iostreams over native
>> descriptors?
>
> I’ve said all I need to say. I’ve also mentioned various alternatives and existing extensions, which doesn’t really jibe with the notion that “we shouldn’t provide iostreams over native descriptors.”


Well, I have trouble following what you're trying to say. You certainly seemed to be against the idea of being able to wrap a closing-iostream on top of a native handle, and I have thus far failed to see your explaining sufficiently well why that would be a bad idea.

David Krauss

Nov 21, 2014, 1:34:52 AM
to std-pr...@isocpp.org
On 2014–11–21, at 2:14 PM, Ville Voutilainen <ville.vo...@gmail.com> wrote:

Sockets are buffered by the OS socket library, not by any user-level class.
There’s no way to correctly buffer sockets by yourself or to put one inside
a FILE.
I must say I have no idea what you're talking about here.
Do you know how pipes and BSD sockets get buffered?

I think I do. And I still don't see why there's "no way" to buffer them in userspace on top of the system buffering.

“On top of system buffering” is not “by yourself,” and the buffering scheme is not the same as FILE and filebuf.

Well, I have trouble following what you're trying to say.

It’s useful to put streambuf or iostreams atop more things, but treating all file descriptors alike is an overgeneralization. Treating them all like normal local files doubly so.

So, the proposed facility might be split across more than one class, and/or it might require the user to specify what kind of descriptor something is. At the least, so there’s some kind of contract on whether closing it signifies a flush or an abort.

You certainly seemed to be against the idea of being able to wrap a closing-iostream on top of a native handle, and I have thus far failed to see your explaining sufficiently well why that would be a bad idea.

Because the user might want to continue using the handle after destroying the iostream, as they likely were before creating it.

Ville Voutilainen

Nov 21, 2014, 1:56:06 AM
to std-pr...@isocpp.org
On 21 November 2014 08:34, David Krauss <pot...@gmail.com> wrote:
> Do you know how pipes and BSD sockets get buffered?
>
>
> I think I do. And I still don't see why there's "no way" to buffer
> them in userspace
> on top of the system buffering.
>
>
> “On top of system buffering” is not “by yourself,” and the buffering scheme
> is not the same as FILE and filebuf.

Well, I don't think anyone has suggested that these streams magically replace the system buffering. I don't see anyone explaining why this on-top-of buffering is in any way incorrect, but I see claims stating that it is.

> Well, I have trouble following what you're trying to say.
>
>
> It’s useful to put streambuf or iostreams atop more things, but treating all
> file descriptors alike is an overgeneralization. Treating them all like
> normal local files doubly so.

Nobody has suggested treating them like local files. With regards to allowing the suggested wrapping of any file descriptor, I've yet to see an explanation of why it shouldn't be done. All I see is vague statements that don't seem to provide anything beyond unsubstantiated concerns.

> So, the proposed facility might be split across more than one class, and/or
> it might require the user to specify what kind of descriptor something is.
> At the least, so there’s some kind of contract on whether closing it
> signifies a flush or an abort.

Sure, it's possible that the idea will evolve to cover multiple scenarios
and will lead to more than one class. Nobody has thus far suggested that
it will just magically happen inside the existing filebuf.

> You
> certainly seemed to be
> against the idea of being able to wrap a closing-iostream on top of a
> native handle,
> and I have thus far failed to see your explaining sufficiently well
> why that would be a
> bad idea.
>
>
> Because the user might want to continue using the handle after destroying
> the iostream, as they likely were before before creating it.

The closing-iostream (or a streambuf, rather) doesn't preclude having a non-closing one, so that's not a reason not to add a closing-iostream on top of a native handle.

Zhihao Yuan

Nov 21, 2014, 2:14:20 AM
to std-pr...@isocpp.org
On Fri, Nov 21, 2014 at 1:14 AM, Ville Voutilainen
<ville.vo...@gmail.com> wrote:
>>
>> The networked filesystem will behave as any other filesystem. It will cause read() to block.
>
> I fail to see how that's different from pipes.

Traditional pipes are half-duplex, while sockets are
full-duplex.

>
> Well, I have trouble following what you're trying to say. You
> certainly seemed to be
> against the idea of being able to wrap a closing-iostream on top of a
> native handle,
> and I have thus far failed to see your explaining sufficiently well
> why that would be a
> bad idea.
>

The idea works for some use cases, but a stream-like interface is too constrained to expose the usefulness of these non-regular-file resources, afaics. I hope to see subprocess libraries and network libraries standardized some day, but I'm not all that interested in this.

--
Zhihao Yuan, ID lichray
The best way to predict the future is to invent it.
___________________________________________________
4BSD -- http://bit.ly/blog4bsd

Ville Voutilainen

Nov 21, 2014, 3:06:17 AM
to std-pr...@isocpp.org, Olaf van der Spek
On 21 November 2014 09:14, Zhihao Yuan <z...@miator.net> wrote:
>>> The networked filesystem will behave as any other filesystem. It will cause read() to block.
>>
>> I fail to see how that's different from pipes.
>
> Traditional pipes are half-duplex, while sockets are
> full-duplex.

Gee, it's almost as if you're suggesting that the ends of the pipe are ifstreams and ofstreams and the socket ends are fstreams. That's no different from opening a file descriptor in read-only mode and trying to write to it. That's not weird at all, the errors are not exceptional, and users know how to deal with it.

>> Well, I have trouble following what you're trying to say. You
>> certainly seemed to be
>> against the idea of being able to wrap a closing-iostream on top of a
>> native handle,
>> and I have thus far failed to see your explaining sufficiently well
>> why that would be a
>> bad idea.
> The idea works for some use cases, but a stream-like
> interface is too constrained for exposing the usefulness
> of these non-regular file resources, afaics. I hope I can
> see subprocess libraries, network libraries standardized
> some day, but not quite interested in this.


The idea seems to cover a vast swath of cases for which users have needed to use platform-specific facilities, including the I/O itself. Beyond ioctls, there don't seem to be many of these "useful facilities" that need to be done after opening the descriptor.

Olaf, I encourage you to proceed with the proposal - regardless of what the non-representative comments on this forum say, users have been requesting the proposed facility for over a decade, and C++ has failed to deliver.

Jim Porter

Nov 21, 2014, 3:33:18 AM
to std-pr...@isocpp.org
On 11/21/2014 2:06 AM, Ville Voutilainen wrote:
> Olaf, I encourage you to proceed with the proposal - regardless of what the
> non-representative comments on this forum say, users have been requesting
> the proposed facility for over a decade, and C++ has failed to deliver.

I plan to write up a proposal for an I/O streams type to work with
native file handles (mainly file descriptors, but also Windows HANDLEs),
but I certainly wouldn't say no to some help.

I'm not sure I find the FILE* case especially useful, but I suppose I
could be convinced otherwise if I saw some use cases.

- Jim


Matthew Woehlke

Nov 21, 2014, 11:03:23 AM
to std-pr...@isocpp.org
On 2014-11-21 01:14, Ville Voutilainen wrote:
> On 21 November 2014 08:01, David Krauss <pot...@gmail.com> wrote:
>> Do you know how pipes and BSD sockets get buffered?
>
> I think I do. And I still don't see why there's "no way" to buffer
> them in userspace on top of the system buffering.

I think the point here is that the userland buffer had better not block
trying to read more bytes than the user has explicitly requested, since
that is clearly undesired behavior. (It can even lead to deadlocks in
the case of a pipe that is waiting for some response before sending more
data.)

Let's say you have processes A and B connected by a pipe. A sends 52
bytes. B tries to read 48, then sends a reply to A that causes A to send
a bunch more. If B is buffering the pipe, that buffer had better not
decide that it reads 4096 byte chunks from the FD at a time and will not
return from any read request until either that 4096 byte buffer is full
or an EOF occurs. That may be okay for files on disk, but for pipes /
sockets, you just caused B to deadlock.

In other words, the buffer must both accept (without blocking) that it
may not be able to read as many bytes as it would like, AND must not
treat that condition as EOF (because it isn't). It's entirely plausible
that a naïve implementation that was designed for handling local files
fails at one or both of those points.
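The pipe scenario can be demonstrated directly: a read() asking for more bytes than are currently in the pipe returns the partial count instead of blocking until its buffer fills (POSIX-only sketch; the function name is hypothetical):

```cpp
#include <cassert>
#include <cstring>
#include <unistd.h>

// A userland buffer over a pipe must tolerate short reads: read() returns
// whatever is available (here 5 bytes) rather than blocking until the full
// requested size arrives -- and a short read is not EOF.
ssize_t short_read_demo(char* buf, size_t bufsize) {
    int fds[2];
    if (pipe(fds) != 0) return -1;
    (void)write(fds[1], "hello", 5);        // writer side sends only 5 bytes
    ssize_t n = read(fds[0], buf, bufsize); // returns 5, does not block
    close(fds[0]);
    close(fds[1]);
    return n;
}
```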

--
Matthew
